00:00:00.001 Started by upstream project "autotest-per-patch" build number 132062 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.020 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.022 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.040 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.066 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.103 > git --version # 'git version 2.39.2' 00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.148 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.148 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.649 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.659 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.671 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.671 > git config core.sparsecheckout # timeout=10 00:00:02.682 > git read-tree -mu HEAD # timeout=10 00:00:02.697 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.712 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.712 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.820 [Pipeline] Start of Pipeline 00:00:02.830 [Pipeline] library 00:00:02.831 Loading library shm_lib@master 00:00:02.831 Library shm_lib@master is cached. Copying from home. 00:00:02.845 [Pipeline] node 00:00:02.854 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.855 [Pipeline] { 00:00:02.862 [Pipeline] catchError 00:00:02.863 [Pipeline] { 00:00:02.874 [Pipeline] wrap 00:00:02.881 [Pipeline] { 00:00:02.890 [Pipeline] stage 00:00:02.891 [Pipeline] { (Prologue) 00:00:02.909 [Pipeline] echo 00:00:02.911 Node: VM-host-WFP7 00:00:02.917 [Pipeline] cleanWs 00:00:02.927 [WS-CLEANUP] Deleting project workspace... 00:00:02.927 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.933 [WS-CLEANUP] done 00:00:03.130 [Pipeline] setCustomBuildProperty 00:00:03.219 [Pipeline] httpRequest 00:00:03.590 [Pipeline] echo 00:00:03.591 Sorcerer 10.211.164.101 is alive 00:00:03.600 [Pipeline] retry 00:00:03.603 [Pipeline] { 00:00:03.616 [Pipeline] httpRequest 00:00:03.621 HttpMethod: GET 00:00:03.621 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.622 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.626 Response Code: HTTP/1.1 200 OK 00:00:03.627 Success: Status code 200 is in the accepted range: 200,404 00:00:03.627 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.634 [Pipeline] } 00:00:12.651 [Pipeline] // retry 00:00:12.659 [Pipeline] sh 00:00:12.944 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.960 [Pipeline] httpRequest 00:00:13.365 [Pipeline] echo 00:00:13.367 Sorcerer 10.211.164.101 is alive 00:00:13.376 [Pipeline] retry 00:00:13.378 [Pipeline] { 00:00:13.393 [Pipeline] httpRequest 00:00:13.398 HttpMethod: GET 00:00:13.399 URL: 
http://10.211.164.101/packages/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:00:13.399 Sending request to url: http://10.211.164.101/packages/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:00:13.405 Response Code: HTTP/1.1 200 OK 00:00:13.405 Success: Status code 200 is in the accepted range: 200,404 00:00:13.406 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:01:56.002 [Pipeline] } 00:01:56.021 [Pipeline] // retry 00:01:56.029 [Pipeline] sh 00:01:56.311 + tar --no-same-owner -xf spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:01:58.865 [Pipeline] sh 00:01:59.151 + git -C spdk log --oneline -n5 00:01:59.151 1aeff8917 lib/reduce: Add a chunk data read/write cache 00:01:59.151 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:01:59.151 12fc2abf1 test: Remove autopackage.sh 00:01:59.151 83ba90867 fio/bdev: fix typo in README 00:01:59.151 45379ed84 module/compress: Cleanup vol data, when claim fails 00:01:59.175 [Pipeline] writeFile 00:01:59.190 [Pipeline] sh 00:01:59.475 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:59.488 [Pipeline] sh 00:01:59.770 + cat autorun-spdk.conf 00:01:59.770 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.770 SPDK_RUN_ASAN=1 00:01:59.770 SPDK_RUN_UBSAN=1 00:01:59.770 SPDK_TEST_RAID=1 00:01:59.770 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.779 RUN_NIGHTLY=0 00:01:59.781 [Pipeline] } 00:01:59.795 [Pipeline] // stage 00:01:59.814 [Pipeline] stage 00:01:59.816 [Pipeline] { (Run VM) 00:01:59.830 [Pipeline] sh 00:02:00.115 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:00.115 + echo 'Start stage prepare_nvme.sh' 00:02:00.115 Start stage prepare_nvme.sh 00:02:00.115 + [[ -n 3 ]] 00:02:00.115 + disk_prefix=ex3 00:02:00.115 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:02:00.115 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:02:00.115 + source 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:02:00.115 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.115 ++ SPDK_RUN_ASAN=1 00:02:00.115 ++ SPDK_RUN_UBSAN=1 00:02:00.115 ++ SPDK_TEST_RAID=1 00:02:00.115 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.115 ++ RUN_NIGHTLY=0 00:02:00.115 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:02:00.115 + nvme_files=() 00:02:00.115 + declare -A nvme_files 00:02:00.115 + backend_dir=/var/lib/libvirt/images/backends 00:02:00.115 + nvme_files['nvme.img']=5G 00:02:00.115 + nvme_files['nvme-cmb.img']=5G 00:02:00.115 + nvme_files['nvme-multi0.img']=4G 00:02:00.115 + nvme_files['nvme-multi1.img']=4G 00:02:00.115 + nvme_files['nvme-multi2.img']=4G 00:02:00.115 + nvme_files['nvme-openstack.img']=8G 00:02:00.115 + nvme_files['nvme-zns.img']=5G 00:02:00.115 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:00.115 + (( SPDK_TEST_FTL == 1 )) 00:02:00.115 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:00.115 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:00.115 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.115 + for nvme in "${!nvme_files[@]}" 00:02:00.115 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:01.052 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.052 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:01.052 + echo 'End stage prepare_nvme.sh' 00:02:01.052 End stage prepare_nvme.sh 00:02:01.064 [Pipeline] sh 00:02:01.345 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:01.346 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:02:01.346 00:02:01.346 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:02:01.346 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:02:01.346 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:02:01.346 HELP=0 00:02:01.346 DRY_RUN=0 
00:02:01.346 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:02:01.346 NVME_DISKS_TYPE=nvme,nvme, 00:02:01.346 NVME_AUTO_CREATE=0 00:02:01.346 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:02:01.346 NVME_CMB=,, 00:02:01.346 NVME_PMR=,, 00:02:01.346 NVME_ZNS=,, 00:02:01.346 NVME_MS=,, 00:02:01.346 NVME_FDP=,, 00:02:01.346 SPDK_VAGRANT_DISTRO=fedora39 00:02:01.346 SPDK_VAGRANT_VMCPU=10 00:02:01.346 SPDK_VAGRANT_VMRAM=12288 00:02:01.346 SPDK_VAGRANT_PROVIDER=libvirt 00:02:01.346 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:01.346 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:01.346 SPDK_OPENSTACK_NETWORK=0 00:02:01.346 VAGRANT_PACKAGE_BOX=0 00:02:01.346 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:01.346 FORCE_DISTRO=true 00:02:01.346 VAGRANT_BOX_VERSION= 00:02:01.346 EXTRA_VAGRANTFILES= 00:02:01.346 NIC_MODEL=virtio 00:02:01.346 00:02:01.346 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:02:01.346 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:02:03.956 Bringing machine 'default' up with 'libvirt' provider... 00:02:04.215 ==> default: Creating image (snapshot of base box volume). 00:02:04.475 ==> default: Creating domain with the following settings... 
00:02:04.475 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730805483_486b685139b6c7b333d8 00:02:04.475 ==> default: -- Domain type: kvm 00:02:04.475 ==> default: -- Cpus: 10 00:02:04.475 ==> default: -- Feature: acpi 00:02:04.475 ==> default: -- Feature: apic 00:02:04.475 ==> default: -- Feature: pae 00:02:04.475 ==> default: -- Memory: 12288M 00:02:04.475 ==> default: -- Memory Backing: hugepages: 00:02:04.475 ==> default: -- Management MAC: 00:02:04.475 ==> default: -- Loader: 00:02:04.476 ==> default: -- Nvram: 00:02:04.476 ==> default: -- Base box: spdk/fedora39 00:02:04.476 ==> default: -- Storage pool: default 00:02:04.476 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730805483_486b685139b6c7b333d8.img (20G) 00:02:04.476 ==> default: -- Volume Cache: default 00:02:04.476 ==> default: -- Kernel: 00:02:04.476 ==> default: -- Initrd: 00:02:04.476 ==> default: -- Graphics Type: vnc 00:02:04.476 ==> default: -- Graphics Port: -1 00:02:04.476 ==> default: -- Graphics IP: 127.0.0.1 00:02:04.476 ==> default: -- Graphics Password: Not defined 00:02:04.476 ==> default: -- Video Type: cirrus 00:02:04.476 ==> default: -- Video VRAM: 9216 00:02:04.476 ==> default: -- Sound Type: 00:02:04.476 ==> default: -- Keymap: en-us 00:02:04.476 ==> default: -- TPM Path: 00:02:04.476 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:04.476 ==> default: -- Command line args: 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:04.476 ==> default: -> value=-drive, 00:02:04.476 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:04.476 ==> default: -> value=-drive, 00:02:04.476 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.476 ==> default: -> value=-drive, 00:02:04.476 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.476 ==> default: -> value=-drive, 00:02:04.476 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:04.476 ==> default: -> value=-device, 00:02:04.476 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.736 ==> default: Creating shared folders metadata... 00:02:04.736 ==> default: Starting domain. 00:02:06.118 ==> default: Waiting for domain to get an IP address... 00:02:24.220 ==> default: Waiting for SSH to become available... 00:02:24.220 ==> default: Configuring and enabling network interfaces... 00:02:30.785 default: SSH address: 192.168.121.171:22 00:02:30.785 default: SSH username: vagrant 00:02:30.785 default: SSH auth method: private key 00:02:33.319 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:41.467 ==> default: Mounting SSHFS shared folder... 00:02:43.375 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:43.375 ==> default: Checking Mount.. 
00:02:44.756 ==> default: Folder Successfully Mounted! 00:02:44.756 ==> default: Running provisioner: file... 00:02:45.695 default: ~/.gitconfig => .gitconfig 00:02:46.263 00:02:46.263 SUCCESS! 00:02:46.263 00:02:46.263 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:46.263 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:46.263 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:46.263 00:02:46.273 [Pipeline] } 00:02:46.289 [Pipeline] // stage 00:02:46.298 [Pipeline] dir 00:02:46.299 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:02:46.301 [Pipeline] { 00:02:46.313 [Pipeline] catchError 00:02:46.315 [Pipeline] { 00:02:46.328 [Pipeline] sh 00:02:46.649 + vagrant ssh-config --host vagrant 00:02:46.649 + sed -ne /^Host/,$p+ 00:02:46.649 tee ssh_conf 00:02:49.938 Host vagrant 00:02:49.938 HostName 192.168.121.171 00:02:49.938 User vagrant 00:02:49.938 Port 22 00:02:49.938 UserKnownHostsFile /dev/null 00:02:49.938 StrictHostKeyChecking no 00:02:49.938 PasswordAuthentication no 00:02:49.938 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:49.938 IdentitiesOnly yes 00:02:49.938 LogLevel FATAL 00:02:49.938 ForwardAgent yes 00:02:49.938 ForwardX11 yes 00:02:49.938 00:02:49.952 [Pipeline] withEnv 00:02:49.954 [Pipeline] { 00:02:49.968 [Pipeline] sh 00:02:50.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:50.248 source /etc/os-release 00:02:50.248 [[ -e /image.version ]] && img=$(< /image.version) 00:02:50.248 # Minimal, systemd-like check. 
00:02:50.248 if [[ -e /.dockerenv ]]; then 00:02:50.248 # Clear garbage from the node's name: 00:02:50.248 # agt-er_autotest_547-896 -> autotest_547-896 00:02:50.248 # $HOSTNAME is the actual container id 00:02:50.248 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:50.248 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:50.248 # We can assume this is a mount from a host where container is running, 00:02:50.248 # so fetch its hostname to easily identify the target swarm worker. 00:02:50.248 container="$(< /etc/hostname) ($agent)" 00:02:50.248 else 00:02:50.248 # Fallback 00:02:50.248 container=$agent 00:02:50.248 fi 00:02:50.248 fi 00:02:50.248 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:50.248 00:02:50.518 [Pipeline] } 00:02:50.532 [Pipeline] // withEnv 00:02:50.541 [Pipeline] setCustomBuildProperty 00:02:50.554 [Pipeline] stage 00:02:50.557 [Pipeline] { (Tests) 00:02:50.573 [Pipeline] sh 00:02:50.855 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:51.124 [Pipeline] sh 00:02:51.404 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:51.677 [Pipeline] timeout 00:02:51.678 Timeout set to expire in 1 hr 30 min 00:02:51.680 [Pipeline] { 00:02:51.694 [Pipeline] sh 00:02:51.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:52.545 HEAD is now at 1aeff8917 lib/reduce: Add a chunk data read/write cache 00:02:52.558 [Pipeline] sh 00:02:52.839 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:53.110 [Pipeline] sh 00:02:53.390 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:53.699 [Pipeline] sh 00:02:53.977 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:54.236 ++ readlink -f spdk_repo 00:02:54.236 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:54.236 + [[ -n /home/vagrant/spdk_repo ]] 00:02:54.236 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:54.236 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:54.236 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:54.236 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:54.236 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:54.236 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:54.236 + cd /home/vagrant/spdk_repo 00:02:54.236 + source /etc/os-release 00:02:54.236 ++ NAME='Fedora Linux' 00:02:54.236 ++ VERSION='39 (Cloud Edition)' 00:02:54.236 ++ ID=fedora 00:02:54.236 ++ VERSION_ID=39 00:02:54.236 ++ VERSION_CODENAME= 00:02:54.236 ++ PLATFORM_ID=platform:f39 00:02:54.236 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:54.236 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:54.236 ++ LOGO=fedora-logo-icon 00:02:54.236 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:54.236 ++ HOME_URL=https://fedoraproject.org/ 00:02:54.236 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:54.236 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:54.236 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:54.236 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:54.236 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:54.236 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:54.236 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:54.236 ++ SUPPORT_END=2024-11-12 00:02:54.236 ++ VARIANT='Cloud Edition' 00:02:54.236 ++ VARIANT_ID=cloud 00:02:54.236 + uname -a 00:02:54.236 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:54.236 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:54.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:54.804 Hugepages 00:02:54.804 
node hugesize free / total 00:02:54.804 node0 1048576kB 0 / 0 00:02:54.804 node0 2048kB 0 / 0 00:02:54.804 00:02:54.804 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:54.804 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:54.804 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:54.804 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:54.804 + rm -f /tmp/spdk-ld-path 00:02:54.805 + source autorun-spdk.conf 00:02:54.805 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:54.805 ++ SPDK_RUN_ASAN=1 00:02:54.805 ++ SPDK_RUN_UBSAN=1 00:02:54.805 ++ SPDK_TEST_RAID=1 00:02:54.805 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:54.805 ++ RUN_NIGHTLY=0 00:02:54.805 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:54.805 + [[ -n '' ]] 00:02:54.805 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:54.805 + for M in /var/spdk/build-*-manifest.txt 00:02:54.805 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:54.805 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.805 + for M in /var/spdk/build-*-manifest.txt 00:02:54.805 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:54.805 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.805 + for M in /var/spdk/build-*-manifest.txt 00:02:54.805 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:54.805 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.805 ++ uname 00:02:54.805 + [[ Linux == \L\i\n\u\x ]] 00:02:54.805 + sudo dmesg -T 00:02:55.064 + sudo dmesg --clear 00:02:55.064 + dmesg_pid=5423 00:02:55.064 + [[ Fedora Linux == FreeBSD ]] 00:02:55.064 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:55.064 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:55.064 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:55.064 + sudo dmesg -Tw 00:02:55.064 + [[ -x /usr/src/fio-static/fio ]] 00:02:55.064 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:55.064 + FIO_BIN=/usr/src/fio-static/fio 00:02:55.064 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:55.064 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:55.064 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:55.064 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:55.064 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:55.064 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:55.064 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:55.064 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:55.064 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:55.064 11:18:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:55.064 11:18:54 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:55.064 11:18:54 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:55.064 11:18:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:55.064 11:18:54 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:55.064 11:18:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:55.064 11:18:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:55.064 11:18:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:55.065 11:18:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:55.065 11:18:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.065 
11:18:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.065 11:18:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.065 11:18:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.065 11:18:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.065 11:18:54 -- paths/export.sh@5 -- $ export PATH 00:02:55.065 11:18:54 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.065 11:18:54 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:55.065 11:18:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:55.065 11:18:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730805534.XXXXXX 00:02:55.065 11:18:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730805534.ApsN4s 00:02:55.065 11:18:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:55.065 11:18:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:55.065 11:18:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:55.065 11:18:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:55.065 11:18:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:55.065 11:18:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:55.065 11:18:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:55.065 11:18:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.324 11:18:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:02:55.324 11:18:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:55.324 11:18:54 -- pm/common@17 -- $ local monitor
00:02:55.324 11:18:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.324 11:18:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:55.324 11:18:54 -- pm/common@21 -- $ date +%s
00:02:55.324 11:18:54 -- pm/common@25 -- $ sleep 1
00:02:55.324 11:18:54 -- pm/common@21 -- $ date +%s
00:02:55.324 11:18:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730805534
00:02:55.324 11:18:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730805534
00:02:55.324 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730805534_collect-cpu-load.pm.log
00:02:55.324 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730805534_collect-vmstat.pm.log
00:02:56.262 11:18:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:56.262 11:18:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:56.262 11:18:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:56.262 11:18:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:56.262 11:18:55 -- spdk/autobuild.sh@16 -- $ date -u
00:02:56.262 Tue Nov 5 11:18:55 AM UTC 2024
00:02:56.262 11:18:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:56.262 v25.01-pre-125-g1aeff8917
00:02:56.262 11:18:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:56.262 11:18:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:56.262 11:18:55 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:56.262 11:18:55 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:56.262 11:18:55 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.262 ************************************
00:02:56.262 START TEST asan
00:02:56.262 ************************************
00:02:56.262 using asan
00:02:56.262 11:18:55 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:56.262
00:02:56.262 real 0m0.000s
00:02:56.262 user 0m0.000s
00:02:56.262 sys 0m0.000s
00:02:56.262 11:18:55 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:56.262 11:18:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:56.262 ************************************
00:02:56.262 END TEST asan
00:02:56.262 ************************************
00:02:56.262 11:18:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:56.262 11:18:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:56.262 11:18:55 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:56.262 11:18:55 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:56.262 11:18:55 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.262 ************************************
00:02:56.262 START TEST ubsan
00:02:56.262 ************************************
00:02:56.262 using ubsan
00:02:56.262 11:18:55 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:56.262
00:02:56.262 real 0m0.000s
00:02:56.262 user 0m0.000s
00:02:56.262 sys 0m0.000s
00:02:56.262 11:18:55 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:56.262 11:18:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:56.262 ************************************
00:02:56.262 END TEST ubsan
00:02:56.262 ************************************
00:02:56.262 11:18:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:56.262 11:18:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:56.262 11:18:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:56.262 11:18:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:56.522 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:56.522 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:57.091 Using 'verbs' RDMA provider
00:03:12.936 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:31.034 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:31.034 Creating mk/config.mk...done.
00:03:31.034 Creating mk/cc.flags.mk...done.
00:03:31.034 Type 'make' to build.
00:03:31.034 11:19:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:31.034 11:19:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:31.034 11:19:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:31.034 11:19:27 -- common/autotest_common.sh@10 -- $ set +x
00:03:31.034 ************************************
00:03:31.034 START TEST make
00:03:31.034 ************************************
00:03:31.034 11:19:27 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:31.034 make[1]: Nothing to be done for 'all'.
00:03:41.068 The Meson build system
00:03:41.068 Version: 1.5.0
00:03:41.068 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:41.068 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:41.068 Build type: native build
00:03:41.068 Program cat found: YES (/usr/bin/cat)
00:03:41.068 Project name: DPDK
00:03:41.068 Project version: 24.03.0
00:03:41.068 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:41.068 C linker for the host machine: cc ld.bfd 2.40-14
00:03:41.068 Host machine cpu family: x86_64
00:03:41.068 Host machine cpu: x86_64
00:03:41.068 Message: ## Building in Developer Mode ##
00:03:41.068 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:41.068 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:41.068 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:41.068 Program python3 found: YES (/usr/bin/python3)
00:03:41.068 Program cat found: YES (/usr/bin/cat)
00:03:41.068 Compiler for C supports arguments -march=native: YES
00:03:41.068 Checking for size of "void *" : 8
00:03:41.068 Checking for size of "void *" : 8 (cached)
00:03:41.068 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:41.068 Library m found: YES
00:03:41.068 Library numa found: YES
00:03:41.068 Has header "numaif.h" : YES
00:03:41.068 Library fdt found: NO
00:03:41.068 Library execinfo found: NO
00:03:41.068 Has header "execinfo.h" : YES
00:03:41.068 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:41.068 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:41.068 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:41.068 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:41.068 Run-time dependency openssl found: YES 3.1.1
00:03:41.068 Run-time dependency libpcap found: YES 1.10.4
00:03:41.068 Has header "pcap.h" with dependency libpcap: YES
00:03:41.068 Compiler for C supports arguments -Wcast-qual: YES
00:03:41.068 Compiler for C supports arguments -Wdeprecated: YES
00:03:41.068 Compiler for C supports arguments -Wformat: YES
00:03:41.068 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:41.068 Compiler for C supports arguments -Wformat-security: NO
00:03:41.068 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:41.068 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:41.068 Compiler for C supports arguments -Wnested-externs: YES
00:03:41.068 Compiler for C supports arguments -Wold-style-definition: YES
00:03:41.068 Compiler for C supports arguments -Wpointer-arith: YES
00:03:41.068 Compiler for C supports arguments -Wsign-compare: YES
00:03:41.068 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:41.068 Compiler for C supports arguments -Wundef: YES
00:03:41.068 Compiler for C supports arguments -Wwrite-strings: YES
00:03:41.068 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:41.068 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:41.068 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:41.068 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:41.068 Program objdump found: YES (/usr/bin/objdump)
00:03:41.068 Compiler for C supports arguments -mavx512f: YES
00:03:41.068 Checking if "AVX512 checking" compiles: YES
00:03:41.068 Fetching value of define "__SSE4_2__" : 1
00:03:41.068 Fetching value of define "__AES__" : 1
00:03:41.068 Fetching value of define "__AVX__" : 1
00:03:41.068 Fetching value of define "__AVX2__" : 1
00:03:41.068 Fetching value of define "__AVX512BW__" : 1
00:03:41.068 Fetching value of define "__AVX512CD__" : 1
00:03:41.068 Fetching value of define "__AVX512DQ__" : 1
00:03:41.068 Fetching value of define "__AVX512F__" : 1
00:03:41.068 Fetching value of define "__AVX512VL__" : 1
00:03:41.068 Fetching value of define "__PCLMUL__" : 1
00:03:41.068 Fetching value of define "__RDRND__" : 1
00:03:41.068 Fetching value of define "__RDSEED__" : 1
00:03:41.068 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:41.068 Fetching value of define "__znver1__" : (undefined)
00:03:41.068 Fetching value of define "__znver2__" : (undefined)
00:03:41.068 Fetching value of define "__znver3__" : (undefined)
00:03:41.068 Fetching value of define "__znver4__" : (undefined)
00:03:41.068 Library asan found: YES
00:03:41.068 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:41.068 Message: lib/log: Defining dependency "log"
00:03:41.068 Message: lib/kvargs: Defining dependency "kvargs"
00:03:41.068 Message: lib/telemetry: Defining dependency "telemetry"
00:03:41.068 Library rt found: YES
00:03:41.068 Checking for function "getentropy" : NO
00:03:41.068 Message: lib/eal: Defining dependency "eal"
00:03:41.068 Message: lib/ring: Defining dependency "ring"
00:03:41.068 Message: lib/rcu: Defining dependency "rcu"
00:03:41.068 Message: lib/mempool: Defining dependency "mempool"
00:03:41.068 Message: lib/mbuf: Defining dependency "mbuf"
00:03:41.068 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:41.068 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:41.068 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:41.068 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:41.068 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:41.068 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:41.068 Compiler for C supports arguments -mpclmul: YES
00:03:41.068 Compiler for C supports arguments -maes: YES
00:03:41.068 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:41.068 Compiler for C supports arguments -mavx512bw: YES
00:03:41.068 Compiler for C supports arguments -mavx512dq: YES
00:03:41.068 Compiler for C supports arguments -mavx512vl: YES
00:03:41.068 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:41.068 Compiler for C supports arguments -mavx2: YES
00:03:41.068 Compiler for C supports arguments -mavx: YES
00:03:41.068 Message: lib/net: Defining dependency "net"
00:03:41.068 Message: lib/meter: Defining dependency "meter"
00:03:41.068 Message: lib/ethdev: Defining dependency "ethdev"
00:03:41.068 Message: lib/pci: Defining dependency "pci"
00:03:41.068 Message: lib/cmdline: Defining dependency "cmdline"
00:03:41.068 Message: lib/hash: Defining dependency "hash"
00:03:41.068 Message: lib/timer: Defining dependency "timer"
00:03:41.068 Message: lib/compressdev: Defining dependency "compressdev"
00:03:41.068 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:41.068 Message: lib/dmadev: Defining dependency "dmadev"
00:03:41.068 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:41.068 Message: lib/power: Defining dependency "power"
00:03:41.068 Message: lib/reorder: Defining dependency "reorder"
00:03:41.068 Message: lib/security: Defining dependency "security"
00:03:41.068 Has header "linux/userfaultfd.h" : YES
00:03:41.068 Has header "linux/vduse.h" : YES
00:03:41.068 Message: lib/vhost: Defining dependency "vhost"
00:03:41.068 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:41.068 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:41.068 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:41.068 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:41.068 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:41.068 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:41.068 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:41.068 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:41.068 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:41.068 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:41.068 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:41.068 Configuring doxy-api-html.conf using configuration
00:03:41.069 Configuring doxy-api-man.conf using configuration
00:03:41.069 Program mandb found: YES (/usr/bin/mandb)
00:03:41.069 Program sphinx-build found: NO
00:03:41.069 Configuring rte_build_config.h using configuration
00:03:41.069 Message:
00:03:41.069 =================
00:03:41.069 Applications Enabled
00:03:41.069 =================
00:03:41.069
00:03:41.069 apps:
00:03:41.069
00:03:41.069
00:03:41.069 Message:
00:03:41.069 =================
00:03:41.069 Libraries Enabled
00:03:41.069 =================
00:03:41.069
00:03:41.069 libs:
00:03:41.069 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:41.069 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:41.069 cryptodev, dmadev, power, reorder, security, vhost,
00:03:41.069
00:03:41.069 Message:
00:03:41.069 ===============
00:03:41.069 Drivers Enabled
00:03:41.069 ===============
00:03:41.069
00:03:41.069 common:
00:03:41.069
00:03:41.069 bus:
00:03:41.069 pci, vdev,
00:03:41.069 mempool:
00:03:41.069 ring,
00:03:41.069 dma:
00:03:41.069
00:03:41.069 net:
00:03:41.069
00:03:41.069 crypto:
00:03:41.069
00:03:41.069 compress:
00:03:41.069
00:03:41.069 vdpa:
00:03:41.069
00:03:41.069
00:03:41.069 Message:
00:03:41.069 =================
00:03:41.069 Content Skipped
00:03:41.069 =================
00:03:41.069
00:03:41.069 apps:
00:03:41.069 dumpcap: explicitly disabled via build config
00:03:41.069 graph: explicitly disabled via build config
00:03:41.069 pdump: explicitly disabled via build config
00:03:41.069 proc-info: explicitly disabled via build config
00:03:41.069 test-acl: explicitly disabled via build config
00:03:41.069 test-bbdev: explicitly disabled via build config
00:03:41.069 test-cmdline: explicitly disabled via build config
00:03:41.069 test-compress-perf: explicitly disabled via build config
00:03:41.069 test-crypto-perf: explicitly disabled via build config
00:03:41.069 test-dma-perf: explicitly disabled via build config
00:03:41.069 test-eventdev: explicitly disabled via build config
00:03:41.069 test-fib: explicitly disabled via build config
00:03:41.069 test-flow-perf: explicitly disabled via build config
00:03:41.069 test-gpudev: explicitly disabled via build config
00:03:41.069 test-mldev: explicitly disabled via build config
00:03:41.069 test-pipeline: explicitly disabled via build config
00:03:41.069 test-pmd: explicitly disabled via build config
00:03:41.069 test-regex: explicitly disabled via build config
00:03:41.069 test-sad: explicitly disabled via build config
00:03:41.069 test-security-perf: explicitly disabled via build config
00:03:41.069
00:03:41.069 libs:
00:03:41.069 argparse: explicitly disabled via build config
00:03:41.069 metrics: explicitly disabled via build config
00:03:41.069 acl: explicitly disabled via build config
00:03:41.069 bbdev: explicitly disabled via build config
00:03:41.069 bitratestats: explicitly disabled via build config
00:03:41.069 bpf: explicitly disabled via build config
00:03:41.069 cfgfile: explicitly disabled via build config
00:03:41.069 distributor: explicitly disabled via build config
00:03:41.069 efd: explicitly disabled via build config
00:03:41.069 eventdev: explicitly disabled via build config
00:03:41.069 dispatcher: explicitly disabled via build config
00:03:41.069 gpudev: explicitly disabled via build config
00:03:41.069 gro: explicitly disabled via build config
00:03:41.069 gso: explicitly disabled via build config
00:03:41.069 ip_frag: explicitly disabled via build config
00:03:41.069 jobstats: explicitly disabled via build config
00:03:41.069 latencystats: explicitly disabled via build config
00:03:41.069 lpm: explicitly disabled via build config
00:03:41.069 member: explicitly disabled via build config
00:03:41.069 pcapng: explicitly disabled via build config
00:03:41.069 rawdev: explicitly disabled via build config
00:03:41.069 regexdev: explicitly disabled via build config
00:03:41.069 mldev: explicitly disabled via build config
00:03:41.069 rib: explicitly disabled via build config
00:03:41.069 sched: explicitly disabled via build config
00:03:41.069 stack: explicitly disabled via build config
00:03:41.069 ipsec: explicitly disabled via build config
00:03:41.069 pdcp: explicitly disabled via build config
00:03:41.069 fib: explicitly disabled via build config
00:03:41.069 port: explicitly disabled via build config
00:03:41.069 pdump: explicitly disabled via build config
00:03:41.069 table: explicitly disabled via build config
00:03:41.069 pipeline: explicitly disabled via build config
00:03:41.069 graph: explicitly disabled via build config
00:03:41.069 node: explicitly disabled via build config
00:03:41.069
00:03:41.069 drivers:
00:03:41.069 common/cpt: not in enabled drivers build config
00:03:41.069 common/dpaax: not in enabled drivers build config
00:03:41.069 common/iavf: not in enabled drivers build config
00:03:41.069 common/idpf: not in enabled drivers build config
00:03:41.069 common/ionic: not in enabled drivers build config
00:03:41.069 common/mvep: not in enabled drivers build config
00:03:41.069 common/octeontx: not in enabled drivers build config
00:03:41.069 bus/auxiliary: not in enabled drivers build config
00:03:41.069 bus/cdx: not in enabled drivers build config
00:03:41.069 bus/dpaa: not in enabled drivers build config
00:03:41.069 bus/fslmc: not in enabled drivers build config
00:03:41.069 bus/ifpga: not in enabled drivers build config
00:03:41.069 bus/platform: not in enabled drivers build config
00:03:41.069 bus/uacce: not in enabled drivers build config
00:03:41.069 bus/vmbus: not in enabled drivers build config
00:03:41.069 common/cnxk: not in enabled drivers build config
00:03:41.069 common/mlx5: not in enabled drivers build config
00:03:41.069 common/nfp: not in enabled drivers build config
00:03:41.069 common/nitrox: not in enabled drivers build config
00:03:41.069 common/qat: not in enabled drivers build config
00:03:41.069 common/sfc_efx: not in enabled drivers build config
00:03:41.069 mempool/bucket: not in enabled drivers build config
00:03:41.069 mempool/cnxk: not in enabled drivers build config
00:03:41.069 mempool/dpaa: not in enabled drivers build config
00:03:41.069 mempool/dpaa2: not in enabled drivers build config
00:03:41.069 mempool/octeontx: not in enabled drivers build config
00:03:41.069 mempool/stack: not in enabled drivers build config
00:03:41.069 dma/cnxk: not in enabled drivers build config
00:03:41.069 dma/dpaa: not in enabled drivers build config
00:03:41.069 dma/dpaa2: not in enabled drivers build config
00:03:41.069 dma/hisilicon: not in enabled drivers build config
00:03:41.069 dma/idxd: not in enabled drivers build config
00:03:41.069 dma/ioat: not in enabled drivers build config
00:03:41.069 dma/skeleton: not in enabled drivers build config
00:03:41.069 net/af_packet: not in enabled drivers build config
00:03:41.069 net/af_xdp: not in enabled drivers build config
00:03:41.069 net/ark: not in enabled drivers build config
00:03:41.069 net/atlantic: not in enabled drivers build config
00:03:41.069 net/avp: not in enabled drivers build config
00:03:41.069 net/axgbe: not in enabled drivers build config
00:03:41.069 net/bnx2x: not in enabled drivers build config
00:03:41.069 net/bnxt: not in enabled drivers build config
00:03:41.069 net/bonding: not in enabled drivers build config
00:03:41.069 net/cnxk: not in enabled drivers build config
00:03:41.069 net/cpfl: not in enabled drivers build config
00:03:41.069 net/cxgbe: not in enabled drivers build config
00:03:41.069 net/dpaa: not in enabled drivers build config
00:03:41.069 net/dpaa2: not in enabled drivers build config
00:03:41.069 net/e1000: not in enabled drivers build config
00:03:41.069 net/ena: not in enabled drivers build config
00:03:41.069 net/enetc: not in enabled drivers build config
00:03:41.069 net/enetfec: not in enabled drivers build config
00:03:41.069 net/enic: not in enabled drivers build config
00:03:41.069 net/failsafe: not in enabled drivers build config
00:03:41.069 net/fm10k: not in enabled drivers build config
00:03:41.069 net/gve: not in enabled drivers build config
00:03:41.069 net/hinic: not in enabled drivers build config
00:03:41.069 net/hns3: not in enabled drivers build config
00:03:41.069 net/i40e: not in enabled drivers build config
00:03:41.069 net/iavf: not in enabled drivers build config
00:03:41.069 net/ice: not in enabled drivers build config
00:03:41.069 net/idpf: not in enabled drivers build config
00:03:41.069 net/igc: not in enabled drivers build config
00:03:41.069 net/ionic: not in enabled drivers build config
00:03:41.069 net/ipn3ke: not in enabled drivers build config
00:03:41.069 net/ixgbe: not in enabled drivers build config
00:03:41.069 net/mana: not in enabled drivers build config
00:03:41.069 net/memif: not in enabled drivers build config
00:03:41.069 net/mlx4: not in enabled drivers build config
00:03:41.069 net/mlx5: not in enabled drivers build config
00:03:41.069 net/mvneta: not in enabled drivers build config
00:03:41.069 net/mvpp2: not in enabled drivers build config
00:03:41.069 net/netvsc: not in enabled drivers build config
00:03:41.069 net/nfb: not in enabled drivers build config
00:03:41.069 net/nfp: not in enabled drivers build config
00:03:41.069 net/ngbe: not in enabled drivers build config
00:03:41.069 net/null: not in enabled drivers build config
00:03:41.069 net/octeontx: not in enabled drivers build config
00:03:41.069 net/octeon_ep: not in enabled drivers build config
00:03:41.069 net/pcap: not in enabled drivers build config
00:03:41.069 net/pfe: not in enabled drivers build config
00:03:41.069 net/qede: not in enabled drivers build config
00:03:41.069 net/ring: not in enabled drivers build config
00:03:41.069 net/sfc: not in enabled drivers build config
00:03:41.069 net/softnic: not in enabled drivers build config
00:03:41.069 net/tap: not in enabled drivers build config
00:03:41.069 net/thunderx: not in enabled drivers build config
00:03:41.069 net/txgbe: not in enabled drivers build config
00:03:41.069 net/vdev_netvsc: not in enabled drivers build config
00:03:41.069 net/vhost: not in enabled drivers build config
00:03:41.069 net/virtio: not in enabled drivers build config
00:03:41.069 net/vmxnet3: not in enabled drivers build config
00:03:41.069 raw/*: missing internal dependency, "rawdev"
00:03:41.069 crypto/armv8: not in enabled drivers build config
00:03:41.069 crypto/bcmfs: not in enabled drivers build config
00:03:41.069 crypto/caam_jr: not in enabled drivers build config
00:03:41.069 crypto/ccp: not in enabled drivers build config
00:03:41.069 crypto/cnxk: not in enabled drivers build config
00:03:41.069 crypto/dpaa_sec: not in enabled drivers build config
00:03:41.069 crypto/dpaa2_sec: not in enabled drivers build config
00:03:41.070 crypto/ipsec_mb: not in enabled drivers build config
00:03:41.070 crypto/mlx5: not in enabled drivers build config
00:03:41.070 crypto/mvsam: not in enabled drivers build config
00:03:41.070 crypto/nitrox: not in enabled drivers build config
00:03:41.070 crypto/null: not in enabled drivers build config
00:03:41.070 crypto/octeontx: not in enabled drivers build config
00:03:41.070 crypto/openssl: not in enabled drivers build config
00:03:41.070 crypto/scheduler: not in enabled drivers build config
00:03:41.070 crypto/uadk: not in enabled drivers build config
00:03:41.070 crypto/virtio: not in enabled drivers build config
00:03:41.070 compress/isal: not in enabled drivers build config
00:03:41.070 compress/mlx5: not in enabled drivers build config
00:03:41.070 compress/nitrox: not in enabled drivers build config
00:03:41.070 compress/octeontx: not in enabled drivers build config
00:03:41.070 compress/zlib: not in enabled drivers build config
00:03:41.070 regex/*: missing internal dependency, "regexdev"
00:03:41.070 ml/*: missing internal dependency, "mldev"
00:03:41.070 vdpa/ifc: not in enabled drivers build config
00:03:41.070 vdpa/mlx5: not in enabled drivers build config
00:03:41.070 vdpa/nfp: not in enabled drivers build config
00:03:41.070 vdpa/sfc: not in enabled drivers build config
00:03:41.070 event/*: missing internal dependency, "eventdev"
00:03:41.070 baseband/*: missing internal dependency, "bbdev"
00:03:41.070 gpu/*: missing internal dependency, "gpudev"
00:03:41.070
00:03:41.070
00:03:41.636 Build targets in project: 85
00:03:41.636
00:03:41.636 DPDK 24.03.0
00:03:41.636
00:03:41.636 User defined options
00:03:41.636 buildtype : debug
00:03:41.636 default_library : shared
00:03:41.636 libdir : lib
00:03:41.636 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:41.636 b_sanitize : address
00:03:41.636 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:41.636 c_link_args :
00:03:41.636 cpu_instruction_set: native
00:03:41.636 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:41.636 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:41.636 enable_docs : false
00:03:41.636 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:41.636 enable_kmods : false
00:03:41.636 max_lcores : 128
00:03:41.636 tests : false
00:03:41.636
00:03:41.636 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:42.202 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:42.202 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:42.461 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:42.461 [3/268] Linking static target lib/librte_kvargs.a
00:03:42.461 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:42.461 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:42.461 [6/268] Linking static target lib/librte_log.a
00:03:42.720 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:42.978 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:42.978 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.978 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:42.978 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:42.978 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:42.978 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:42.978 [14/268] Linking static target lib/librte_telemetry.a
00:03:42.978 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:43.236 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:43.236 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:43.236 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:43.494 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.752 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:43.752 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:43.752 [22/268] Linking target lib/librte_log.so.24.1
00:03:43.752 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:43.752 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:43.752 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:43.752 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:43.752 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:44.011 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:44.011 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:44.011 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:44.011 [31/268] Linking target lib/librte_kvargs.so.24.1
00:03:44.011 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:44.269 [33/268] Linking target lib/librte_telemetry.so.24.1
00:03:44.269 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:44.269 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:44.527 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:44.527 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:44.527 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:44.527 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:44.527 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:44.527 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:44.527 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:44.527 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:44.527 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:44.785 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:44.785 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:45.043 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:45.043 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:45.043 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:45.302 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:45.302 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:45.302 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:45.568 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:45.568 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:45.568 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:45.568 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:45.568 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:45.826 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:45.826 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:45.826 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:45.826 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:46.085 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:46.085 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:46.085 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:46.085 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:46.343 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:46.343 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:46.343 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:46.343 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:46.601 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:46.601 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:46.601 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:46.859 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:46.859 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:46.859 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:47.117 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:47.117 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:47.117 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:47.117 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:47.117 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:47.376 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:47.376 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:47.376 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:47.376 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:47.376 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:47.376 [86/268] Linking static target lib/librte_ring.a
00:03:47.376 [87/268] Linking static target lib/librte_eal.a
00:03:47.634 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:47.634 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:47.634 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:47.634 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:47.892 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:47.892 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:47.892 [94/268] Linking static target lib/librte_rcu.a
00:03:47.892 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:47.892 [96/268] Linking static target lib/librte_mempool.a
00:03:48.151 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:48.151 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:48.151 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:48.409 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:48.409 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:48.409 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:48.409 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:48.409 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:48.667 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:48.667 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:48.667 [107/268] Linking static target lib/librte_meter.a
00:03:48.667 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:48.667 [109/268] Linking static target lib/librte_net.a
00:03:48.926 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:48.926 [111/268] Linking static target lib/librte_mbuf.a
00:03:48.926 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:49.184 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:49.184 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.184 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:49.184 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.184 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:49.442 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.442 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:49.700 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:49.958 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:49.958 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:49.958 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:50.216 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:50.216 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:50.216 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:50.216 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:50.474 [128/268] Linking static target lib/librte_pci.a
00:03:50.474 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:50.474 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:50.474 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:50.474 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:50.732 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:50.732 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:50.732 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:50.732 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:50.732 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:50.732 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:50.732 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:50.732 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:50.732 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:50.732 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:51.024 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:51.024 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:51.024 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:51.024 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:51.283 [147/268] Linking static target lib/librte_cmdline.a
00:03:51.283 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:51.283 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:51.542 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:51.542 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:51.542 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:51.801 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:51.801 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:51.801 [155/268] Linking static target lib/librte_timer.a
00:03:52.060 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:52.060 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:52.060 [158/268] Linking static target lib/librte_compressdev.a
00:03:52.319 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:52.319 [160/268] Linking static target lib/librte_hash.a
00:03:52.319 [161/268] Compiling C
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:52.319 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:52.578 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:52.578 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:52.578 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.578 [166/268] Linking static target lib/librte_dmadev.a 00:03:52.836 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:52.836 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:52.836 [169/268] Linking static target lib/librte_ethdev.a 00:03:52.836 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.095 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:53.095 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:53.095 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:53.095 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.353 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:53.353 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:53.612 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.612 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:53.612 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.612 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:53.612 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:53.612 [182/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:53.612 [183/268] Linking static target lib/librte_cryptodev.a 00:03:53.870 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:54.128 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:54.128 [186/268] Linking static target lib/librte_power.a 00:03:54.128 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:54.387 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:54.387 [189/268] Linking static target lib/librte_reorder.a 00:03:54.387 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:54.387 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:54.387 [192/268] Linking static target lib/librte_security.a 00:03:54.387 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:54.954 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:54.954 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.212 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.212 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.470 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:55.470 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:55.470 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:55.727 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:55.727 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:55.985 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:55.985 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:55.985 [205/268] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:55.985 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:56.244 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.244 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:56.244 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:56.244 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:56.244 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:56.502 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:56.502 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:56.502 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:56.502 [215/268] Linking static target drivers/librte_bus_pci.a 00:03:56.502 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:56.761 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:56.761 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:56.761 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:56.761 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:56.761 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:57.019 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:57.019 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.019 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.019 [225/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.019 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:57.277 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.652 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:59.221 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.221 [230/268] Linking target lib/librte_eal.so.24.1 00:03:59.479 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:59.479 [232/268] Linking target lib/librte_meter.so.24.1 00:03:59.479 [233/268] Linking target lib/librte_pci.so.24.1 00:03:59.479 [234/268] Linking target lib/librte_ring.so.24.1 00:03:59.479 [235/268] Linking target lib/librte_timer.so.24.1 00:03:59.479 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:59.479 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:59.738 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:59.738 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:59.738 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:59.738 [241/268] Linking target lib/librte_mempool.so.24.1 00:03:59.738 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:59.738 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:59.738 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:59.738 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:59.738 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:59.738 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:59.738 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:59.738 
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:59.997 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:59.997 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:59.997 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:03:59.997 [253/268] Linking target lib/librte_net.so.24.1 00:03:59.997 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:00.255 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:00.255 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:00.255 [257/268] Linking target lib/librte_hash.so.24.1 00:04:00.255 [258/268] Linking target lib/librte_security.so.24.1 00:04:00.255 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:00.515 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:02.420 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.420 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:02.420 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:02.420 [264/268] Linking target lib/librte_power.so.24.1 00:04:02.679 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:02.938 [266/268] Linking static target lib/librte_vhost.a 00:04:05.475 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.475 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:05.475 INFO: autodetecting backend as ninja 00:04:05.475 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:27.448 CC lib/log/log.o 00:04:27.448 CC lib/log/log_flags.o 00:04:27.448 CC lib/log/log_deprecated.o 00:04:27.448 CC lib/ut/ut.o 00:04:27.448 CC lib/ut_mock/mock.o 00:04:27.448 LIB libspdk_ut.a 00:04:27.448 LIB libspdk_log.a 
00:04:27.448 LIB libspdk_ut_mock.a 00:04:27.448 SO libspdk_ut.so.2.0 00:04:27.448 SO libspdk_log.so.7.1 00:04:27.448 SO libspdk_ut_mock.so.6.0 00:04:27.448 SYMLINK libspdk_ut.so 00:04:27.448 SYMLINK libspdk_ut_mock.so 00:04:27.448 SYMLINK libspdk_log.so 00:04:27.448 CXX lib/trace_parser/trace.o 00:04:27.448 CC lib/ioat/ioat.o 00:04:27.448 CC lib/dma/dma.o 00:04:27.448 CC lib/util/cpuset.o 00:04:27.448 CC lib/util/base64.o 00:04:27.448 CC lib/util/bit_array.o 00:04:27.448 CC lib/util/crc32.o 00:04:27.448 CC lib/util/crc16.o 00:04:27.448 CC lib/util/crc32c.o 00:04:27.448 CC lib/vfio_user/host/vfio_user_pci.o 00:04:27.448 CC lib/util/crc32_ieee.o 00:04:27.448 CC lib/util/crc64.o 00:04:27.448 CC lib/util/dif.o 00:04:27.448 CC lib/vfio_user/host/vfio_user.o 00:04:27.448 LIB libspdk_dma.a 00:04:27.448 CC lib/util/fd.o 00:04:27.448 SO libspdk_dma.so.5.0 00:04:27.448 CC lib/util/fd_group.o 00:04:27.448 CC lib/util/file.o 00:04:27.448 SYMLINK libspdk_dma.so 00:04:27.448 CC lib/util/hexlify.o 00:04:27.448 LIB libspdk_ioat.a 00:04:27.448 CC lib/util/iov.o 00:04:27.448 SO libspdk_ioat.so.7.0 00:04:27.448 CC lib/util/math.o 00:04:27.448 CC lib/util/net.o 00:04:27.448 LIB libspdk_vfio_user.a 00:04:27.448 SYMLINK libspdk_ioat.so 00:04:27.448 CC lib/util/pipe.o 00:04:27.448 CC lib/util/strerror_tls.o 00:04:27.448 SO libspdk_vfio_user.so.5.0 00:04:27.448 CC lib/util/string.o 00:04:27.448 SYMLINK libspdk_vfio_user.so 00:04:27.448 CC lib/util/uuid.o 00:04:27.448 CC lib/util/xor.o 00:04:27.448 CC lib/util/zipf.o 00:04:27.448 CC lib/util/md5.o 00:04:27.448 LIB libspdk_util.a 00:04:27.448 SO libspdk_util.so.10.0 00:04:27.448 LIB libspdk_trace_parser.a 00:04:27.448 SO libspdk_trace_parser.so.6.0 00:04:27.448 SYMLINK libspdk_util.so 00:04:27.448 SYMLINK libspdk_trace_parser.so 00:04:27.448 CC lib/json/json_parse.o 00:04:27.448 CC lib/json/json_util.o 00:04:27.448 CC lib/conf/conf.o 00:04:27.448 CC lib/rdma_provider/common.o 00:04:27.448 CC lib/rdma_provider/rdma_provider_verbs.o 
00:04:27.448 CC lib/json/json_write.o 00:04:27.448 CC lib/env_dpdk/env.o 00:04:27.448 CC lib/idxd/idxd.o 00:04:27.448 CC lib/rdma_utils/rdma_utils.o 00:04:27.448 CC lib/vmd/vmd.o 00:04:27.448 CC lib/env_dpdk/memory.o 00:04:27.448 LIB libspdk_rdma_provider.a 00:04:27.448 LIB libspdk_conf.a 00:04:27.448 SO libspdk_rdma_provider.so.6.0 00:04:27.448 CC lib/env_dpdk/pci.o 00:04:27.448 SO libspdk_conf.so.6.0 00:04:27.448 CC lib/idxd/idxd_user.o 00:04:27.448 SYMLINK libspdk_rdma_provider.so 00:04:27.448 LIB libspdk_json.a 00:04:27.448 CC lib/env_dpdk/init.o 00:04:27.448 SYMLINK libspdk_conf.so 00:04:27.448 CC lib/env_dpdk/threads.o 00:04:27.448 LIB libspdk_rdma_utils.a 00:04:27.449 SO libspdk_json.so.6.0 00:04:27.449 SO libspdk_rdma_utils.so.1.0 00:04:27.449 SYMLINK libspdk_rdma_utils.so 00:04:27.449 SYMLINK libspdk_json.so 00:04:27.449 CC lib/env_dpdk/pci_ioat.o 00:04:27.449 CC lib/vmd/led.o 00:04:27.449 CC lib/env_dpdk/pci_virtio.o 00:04:27.449 CC lib/idxd/idxd_kernel.o 00:04:27.449 CC lib/env_dpdk/pci_vmd.o 00:04:27.708 CC lib/env_dpdk/pci_idxd.o 00:04:27.708 CC lib/env_dpdk/pci_event.o 00:04:27.708 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.708 CC lib/env_dpdk/sigbus_handler.o 00:04:27.708 CC lib/env_dpdk/pci_dpdk.o 00:04:27.708 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:27.708 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:27.708 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.708 LIB libspdk_idxd.a 00:04:27.708 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.968 LIB libspdk_vmd.a 00:04:27.968 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.968 SO libspdk_idxd.so.12.1 00:04:27.968 SO libspdk_vmd.so.6.0 00:04:27.968 SYMLINK libspdk_idxd.so 00:04:27.968 SYMLINK libspdk_vmd.so 00:04:28.228 LIB libspdk_jsonrpc.a 00:04:28.228 SO libspdk_jsonrpc.so.6.0 00:04:28.228 SYMLINK libspdk_jsonrpc.so 00:04:28.796 CC lib/rpc/rpc.o 00:04:28.796 LIB libspdk_env_dpdk.a 00:04:28.796 SO libspdk_env_dpdk.so.15.1 00:04:28.796 LIB libspdk_rpc.a 00:04:29.055 SO libspdk_rpc.so.6.0 00:04:29.055 SYMLINK libspdk_rpc.so 
00:04:29.055 SYMLINK libspdk_env_dpdk.so 00:04:29.315 CC lib/keyring/keyring_rpc.o 00:04:29.315 CC lib/keyring/keyring.o 00:04:29.315 CC lib/notify/notify.o 00:04:29.315 CC lib/notify/notify_rpc.o 00:04:29.315 CC lib/trace/trace.o 00:04:29.315 CC lib/trace/trace_flags.o 00:04:29.315 CC lib/trace/trace_rpc.o 00:04:29.574 LIB libspdk_notify.a 00:04:29.574 SO libspdk_notify.so.6.0 00:04:29.574 LIB libspdk_keyring.a 00:04:29.574 SYMLINK libspdk_notify.so 00:04:29.574 SO libspdk_keyring.so.2.0 00:04:29.574 LIB libspdk_trace.a 00:04:29.574 SYMLINK libspdk_keyring.so 00:04:29.833 SO libspdk_trace.so.11.0 00:04:29.833 SYMLINK libspdk_trace.so 00:04:30.093 CC lib/sock/sock.o 00:04:30.093 CC lib/sock/sock_rpc.o 00:04:30.093 CC lib/thread/thread.o 00:04:30.093 CC lib/thread/iobuf.o 00:04:30.661 LIB libspdk_sock.a 00:04:30.661 SO libspdk_sock.so.10.0 00:04:30.920 SYMLINK libspdk_sock.so 00:04:31.178 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:31.178 CC lib/nvme/nvme_ctrlr.o 00:04:31.178 CC lib/nvme/nvme_fabric.o 00:04:31.178 CC lib/nvme/nvme_ns_cmd.o 00:04:31.178 CC lib/nvme/nvme_ns.o 00:04:31.178 CC lib/nvme/nvme_pcie.o 00:04:31.178 CC lib/nvme/nvme_pcie_common.o 00:04:31.178 CC lib/nvme/nvme_qpair.o 00:04:31.178 CC lib/nvme/nvme.o 00:04:32.116 CC lib/nvme/nvme_quirks.o 00:04:32.116 CC lib/nvme/nvme_transport.o 00:04:32.116 LIB libspdk_thread.a 00:04:32.116 SO libspdk_thread.so.11.0 00:04:32.116 CC lib/nvme/nvme_discovery.o 00:04:32.116 SYMLINK libspdk_thread.so 00:04:32.116 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:32.116 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:32.116 CC lib/nvme/nvme_tcp.o 00:04:32.418 CC lib/nvme/nvme_opal.o 00:04:32.418 CC lib/nvme/nvme_io_msg.o 00:04:32.418 CC lib/nvme/nvme_poll_group.o 00:04:32.676 CC lib/accel/accel.o 00:04:32.676 CC lib/accel/accel_rpc.o 00:04:32.676 CC lib/blob/blobstore.o 00:04:32.935 CC lib/init/json_config.o 00:04:32.935 CC lib/virtio/virtio.o 00:04:32.935 CC lib/virtio/virtio_vhost_user.o 00:04:32.935 CC lib/blob/request.o 00:04:33.195 CC 
lib/init/subsystem.o 00:04:33.195 CC lib/fsdev/fsdev.o 00:04:33.195 CC lib/fsdev/fsdev_io.o 00:04:33.195 CC lib/fsdev/fsdev_rpc.o 00:04:33.195 CC lib/init/subsystem_rpc.o 00:04:33.195 CC lib/virtio/virtio_vfio_user.o 00:04:33.453 CC lib/init/rpc.o 00:04:33.453 CC lib/blob/zeroes.o 00:04:33.453 CC lib/blob/blob_bs_dev.o 00:04:33.453 LIB libspdk_init.a 00:04:33.453 CC lib/virtio/virtio_pci.o 00:04:33.453 CC lib/nvme/nvme_zns.o 00:04:33.453 SO libspdk_init.so.6.0 00:04:33.453 CC lib/accel/accel_sw.o 00:04:33.712 SYMLINK libspdk_init.so 00:04:33.712 CC lib/nvme/nvme_stubs.o 00:04:33.712 CC lib/nvme/nvme_auth.o 00:04:33.973 LIB libspdk_virtio.a 00:04:33.973 LIB libspdk_fsdev.a 00:04:33.973 SO libspdk_virtio.so.7.0 00:04:33.973 SO libspdk_fsdev.so.2.0 00:04:33.973 CC lib/event/app.o 00:04:33.973 LIB libspdk_accel.a 00:04:33.973 CC lib/nvme/nvme_cuse.o 00:04:33.973 SYMLINK libspdk_virtio.so 00:04:33.973 CC lib/nvme/nvme_rdma.o 00:04:33.973 SYMLINK libspdk_fsdev.so 00:04:33.973 SO libspdk_accel.so.16.0 00:04:33.973 CC lib/event/reactor.o 00:04:33.973 SYMLINK libspdk_accel.so 00:04:33.973 CC lib/event/log_rpc.o 00:04:34.233 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:34.233 CC lib/event/app_rpc.o 00:04:34.233 CC lib/event/scheduler_static.o 00:04:34.493 CC lib/bdev/bdev.o 00:04:34.493 CC lib/bdev/bdev_rpc.o 00:04:34.493 CC lib/bdev/bdev_zone.o 00:04:34.493 CC lib/bdev/part.o 00:04:34.493 LIB libspdk_event.a 00:04:34.493 SO libspdk_event.so.14.0 00:04:34.493 CC lib/bdev/scsi_nvme.o 00:04:34.752 SYMLINK libspdk_event.so 00:04:34.752 LIB libspdk_fuse_dispatcher.a 00:04:35.011 SO libspdk_fuse_dispatcher.so.1.0 00:04:35.011 SYMLINK libspdk_fuse_dispatcher.so 00:04:35.578 LIB libspdk_nvme.a 00:04:35.578 SO libspdk_nvme.so.14.1 00:04:36.146 SYMLINK libspdk_nvme.so 00:04:36.770 LIB libspdk_blob.a 00:04:36.770 SO libspdk_blob.so.11.0 00:04:36.770 SYMLINK libspdk_blob.so 00:04:37.339 CC lib/blobfs/blobfs.o 00:04:37.339 CC lib/blobfs/tree.o 00:04:37.339 CC lib/lvol/lvol.o 
00:04:37.598 LIB libspdk_bdev.a 00:04:37.598 SO libspdk_bdev.so.17.0 00:04:37.858 SYMLINK libspdk_bdev.so 00:04:38.117 CC lib/scsi/lun.o 00:04:38.117 CC lib/scsi/scsi.o 00:04:38.117 CC lib/scsi/port.o 00:04:38.117 CC lib/scsi/dev.o 00:04:38.117 CC lib/nvmf/ctrlr.o 00:04:38.117 CC lib/nbd/nbd.o 00:04:38.117 CC lib/ublk/ublk.o 00:04:38.117 CC lib/ftl/ftl_core.o 00:04:38.117 LIB libspdk_blobfs.a 00:04:38.117 CC lib/nbd/nbd_rpc.o 00:04:38.117 CC lib/scsi/scsi_bdev.o 00:04:38.117 SO libspdk_blobfs.so.10.0 00:04:38.376 LIB libspdk_lvol.a 00:04:38.376 SYMLINK libspdk_blobfs.so 00:04:38.376 CC lib/nvmf/ctrlr_discovery.o 00:04:38.376 CC lib/nvmf/ctrlr_bdev.o 00:04:38.376 SO libspdk_lvol.so.10.0 00:04:38.376 CC lib/nvmf/subsystem.o 00:04:38.376 CC lib/nvmf/nvmf.o 00:04:38.376 SYMLINK libspdk_lvol.so 00:04:38.376 CC lib/nvmf/nvmf_rpc.o 00:04:38.376 CC lib/ftl/ftl_init.o 00:04:38.376 LIB libspdk_nbd.a 00:04:38.635 SO libspdk_nbd.so.7.0 00:04:38.635 SYMLINK libspdk_nbd.so 00:04:38.635 CC lib/ublk/ublk_rpc.o 00:04:38.635 CC lib/ftl/ftl_layout.o 00:04:38.635 CC lib/ftl/ftl_debug.o 00:04:38.894 LIB libspdk_ublk.a 00:04:38.894 CC lib/scsi/scsi_pr.o 00:04:38.894 SO libspdk_ublk.so.3.0 00:04:38.894 CC lib/nvmf/transport.o 00:04:38.894 SYMLINK libspdk_ublk.so 00:04:38.894 CC lib/nvmf/tcp.o 00:04:39.153 CC lib/ftl/ftl_io.o 00:04:39.153 CC lib/ftl/ftl_sb.o 00:04:39.153 CC lib/ftl/ftl_l2p.o 00:04:39.153 CC lib/scsi/scsi_rpc.o 00:04:39.412 CC lib/ftl/ftl_l2p_flat.o 00:04:39.412 CC lib/ftl/ftl_nv_cache.o 00:04:39.412 CC lib/nvmf/stubs.o 00:04:39.412 CC lib/scsi/task.o 00:04:39.412 CC lib/ftl/ftl_band.o 00:04:39.412 CC lib/ftl/ftl_band_ops.o 00:04:39.412 CC lib/nvmf/mdns_server.o 00:04:39.670 CC lib/nvmf/rdma.o 00:04:39.670 LIB libspdk_scsi.a 00:04:39.670 SO libspdk_scsi.so.9.0 00:04:39.929 SYMLINK libspdk_scsi.so 00:04:39.929 CC lib/ftl/ftl_writer.o 00:04:39.929 CC lib/ftl/ftl_rq.o 00:04:39.929 CC lib/nvmf/auth.o 00:04:39.929 CC lib/ftl/ftl_reloc.o 00:04:39.929 CC lib/ftl/ftl_l2p_cache.o 
00:04:39.929 CC lib/iscsi/conn.o 00:04:40.188 CC lib/iscsi/init_grp.o 00:04:40.188 CC lib/vhost/vhost.o 00:04:40.447 CC lib/iscsi/iscsi.o 00:04:40.447 CC lib/vhost/vhost_rpc.o 00:04:40.447 CC lib/iscsi/param.o 00:04:40.447 CC lib/iscsi/portal_grp.o 00:04:40.705 CC lib/ftl/ftl_p2l.o 00:04:40.705 CC lib/iscsi/tgt_node.o 00:04:40.705 CC lib/ftl/ftl_p2l_log.o 00:04:40.705 CC lib/iscsi/iscsi_subsystem.o 00:04:40.705 CC lib/iscsi/iscsi_rpc.o 00:04:40.705 CC lib/iscsi/task.o 00:04:40.963 CC lib/vhost/vhost_scsi.o 00:04:40.963 CC lib/vhost/vhost_blk.o 00:04:41.221 CC lib/vhost/rte_vhost_user.o 00:04:41.221 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.221 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.221 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.221 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.221 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.479 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.479 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.479 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.479 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.737 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.737 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:41.737 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:41.995 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:41.995 CC lib/ftl/utils/ftl_conf.o 00:04:41.995 CC lib/ftl/utils/ftl_md.o 00:04:41.995 CC lib/ftl/utils/ftl_mempool.o 00:04:41.995 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.995 CC lib/ftl/utils/ftl_property.o 00:04:42.252 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.253 LIB libspdk_iscsi.a 00:04:42.253 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.253 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.253 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.253 SO libspdk_iscsi.so.8.0 00:04:42.253 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.253 LIB libspdk_vhost.a 00:04:42.253 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.511 SO libspdk_vhost.so.8.0 00:04:42.511 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:42.511 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.511 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:04:42.511 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.511 SYMLINK libspdk_iscsi.so 00:04:42.511 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.511 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:42.511 LIB libspdk_nvmf.a 00:04:42.511 SYMLINK libspdk_vhost.so 00:04:42.511 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:42.511 CC lib/ftl/base/ftl_base_dev.o 00:04:42.511 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.769 SO libspdk_nvmf.so.20.0 00:04:42.769 CC lib/ftl/ftl_trace.o 00:04:43.028 SYMLINK libspdk_nvmf.so 00:04:43.028 LIB libspdk_ftl.a 00:04:43.287 SO libspdk_ftl.so.9.0 00:04:43.546 SYMLINK libspdk_ftl.so 00:04:43.804 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.063 CC module/keyring/linux/keyring.o 00:04:44.063 CC module/blob/bdev/blob_bdev.o 00:04:44.063 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.063 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.063 CC module/keyring/file/keyring.o 00:04:44.063 CC module/accel/error/accel_error.o 00:04:44.063 CC module/sock/posix/posix.o 00:04:44.063 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.063 CC module/fsdev/aio/fsdev_aio.o 00:04:44.063 LIB libspdk_env_dpdk_rpc.a 00:04:44.063 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.063 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.063 CC module/keyring/file/keyring_rpc.o 00:04:44.063 CC module/keyring/linux/keyring_rpc.o 00:04:44.063 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.063 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.063 LIB libspdk_scheduler_gscheduler.a 00:04:44.063 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.321 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.321 CC module/accel/error/accel_error_rpc.o 00:04:44.321 LIB libspdk_scheduler_dynamic.a 00:04:44.321 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.321 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.321 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.321 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.321 LIB libspdk_keyring_file.a 00:04:44.321 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.321 LIB 
libspdk_keyring_linux.a 00:04:44.321 LIB libspdk_blob_bdev.a 00:04:44.321 SO libspdk_keyring_file.so.2.0 00:04:44.321 SO libspdk_keyring_linux.so.1.0 00:04:44.321 SO libspdk_blob_bdev.so.11.0 00:04:44.321 LIB libspdk_accel_error.a 00:04:44.321 SYMLINK libspdk_keyring_file.so 00:04:44.321 SYMLINK libspdk_keyring_linux.so 00:04:44.321 SO libspdk_accel_error.so.2.0 00:04:44.321 SYMLINK libspdk_blob_bdev.so 00:04:44.580 SYMLINK libspdk_accel_error.so 00:04:44.580 CC module/accel/ioat/accel_ioat.o 00:04:44.580 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.580 CC module/accel/dsa/accel_dsa.o 00:04:44.580 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.580 CC module/accel/iaa/accel_iaa.o 00:04:44.580 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.580 CC module/bdev/delay/vbdev_delay.o 00:04:44.580 LIB libspdk_accel_ioat.a 00:04:44.580 CC module/bdev/error/vbdev_error.o 00:04:44.840 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.840 SO libspdk_accel_ioat.so.6.0 00:04:44.840 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.840 LIB libspdk_accel_iaa.a 00:04:44.840 SYMLINK libspdk_accel_ioat.so 00:04:44.840 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.840 CC module/bdev/gpt/gpt.o 00:04:44.840 SO libspdk_accel_iaa.so.3.0 00:04:44.840 LIB libspdk_accel_dsa.a 00:04:44.840 SO libspdk_accel_dsa.so.5.0 00:04:44.840 LIB libspdk_fsdev_aio.a 00:04:44.840 SYMLINK libspdk_accel_iaa.so 00:04:44.840 CC module/bdev/gpt/vbdev_gpt.o 00:04:44.840 SO libspdk_fsdev_aio.so.1.0 00:04:44.840 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:44.840 LIB libspdk_sock_posix.a 00:04:44.840 SYMLINK libspdk_accel_dsa.so 00:04:44.840 SO libspdk_sock_posix.so.6.0 00:04:45.100 LIB libspdk_blobfs_bdev.a 00:04:45.100 SYMLINK libspdk_fsdev_aio.so 00:04:45.100 LIB libspdk_bdev_error.a 00:04:45.100 SO libspdk_blobfs_bdev.so.6.0 00:04:45.100 SO libspdk_bdev_error.so.6.0 00:04:45.100 SYMLINK libspdk_sock_posix.so 00:04:45.100 SYMLINK libspdk_blobfs_bdev.so 00:04:45.100 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.100 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.100 SYMLINK libspdk_bdev_error.so 00:04:45.100 LIB libspdk_bdev_delay.a 00:04:45.100 CC module/bdev/null/bdev_null.o 00:04:45.100 CC module/bdev/malloc/bdev_malloc.o 00:04:45.100 SO libspdk_bdev_delay.so.6.0 00:04:45.100 CC module/bdev/nvme/bdev_nvme.o 00:04:45.100 LIB libspdk_bdev_gpt.a 00:04:45.360 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.360 SO libspdk_bdev_gpt.so.6.0 00:04:45.360 SYMLINK libspdk_bdev_delay.so 00:04:45.360 CC module/bdev/raid/bdev_raid.o 00:04:45.360 SYMLINK libspdk_bdev_gpt.so 00:04:45.360 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.360 CC module/bdev/split/vbdev_split.o 00:04:45.360 CC module/bdev/null/bdev_null_rpc.o 00:04:45.360 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.619 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.619 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.619 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.619 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.619 LIB libspdk_bdev_null.a 00:04:45.619 SO libspdk_bdev_null.so.6.0 00:04:45.619 LIB libspdk_bdev_lvol.a 00:04:45.878 CC module/bdev/aio/bdev_aio.o 00:04:45.878 SO libspdk_bdev_lvol.so.6.0 00:04:45.878 SYMLINK libspdk_bdev_null.so 00:04:45.878 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.878 LIB libspdk_bdev_malloc.a 00:04:45.878 LIB libspdk_bdev_split.a 00:04:45.878 LIB libspdk_bdev_passthru.a 00:04:45.878 SO libspdk_bdev_split.so.6.0 00:04:45.878 SO libspdk_bdev_malloc.so.6.0 00:04:45.878 SO libspdk_bdev_passthru.so.6.0 00:04:45.878 SYMLINK libspdk_bdev_lvol.so 00:04:45.878 CC module/bdev/raid/raid0.o 00:04:45.878 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.878 CC module/bdev/nvme/nvme_rpc.o 00:04:45.878 SYMLINK libspdk_bdev_split.so 00:04:45.878 CC module/bdev/raid/raid1.o 00:04:45.878 SYMLINK libspdk_bdev_malloc.so 00:04:45.878 CC module/bdev/raid/concat.o 00:04:45.878 SYMLINK libspdk_bdev_passthru.so 00:04:45.878 CC module/bdev/raid/raid5f.o 00:04:46.136 LIB libspdk_bdev_zone_block.a 
00:04:46.136 SO libspdk_bdev_zone_block.so.6.0 00:04:46.136 SYMLINK libspdk_bdev_zone_block.so 00:04:46.136 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.136 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.136 CC module/bdev/nvme/vbdev_opal.o 00:04:46.136 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.136 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.396 LIB libspdk_bdev_aio.a 00:04:46.396 CC module/bdev/ftl/bdev_ftl.o 00:04:46.396 SO libspdk_bdev_aio.so.6.0 00:04:46.396 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.396 SYMLINK libspdk_bdev_aio.so 00:04:46.396 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.396 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.655 LIB libspdk_bdev_raid.a 00:04:46.655 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.655 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.655 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.655 SO libspdk_bdev_raid.so.6.0 00:04:46.655 SYMLINK libspdk_bdev_raid.so 00:04:46.655 LIB libspdk_bdev_ftl.a 00:04:46.913 SO libspdk_bdev_ftl.so.6.0 00:04:46.913 SYMLINK libspdk_bdev_ftl.so 00:04:46.913 LIB libspdk_bdev_iscsi.a 00:04:46.913 SO libspdk_bdev_iscsi.so.6.0 00:04:47.172 SYMLINK libspdk_bdev_iscsi.so 00:04:47.431 LIB libspdk_bdev_virtio.a 00:04:47.431 SO libspdk_bdev_virtio.so.6.0 00:04:47.431 SYMLINK libspdk_bdev_virtio.so 00:04:48.873 LIB libspdk_bdev_nvme.a 00:04:48.873 SO libspdk_bdev_nvme.so.7.1 00:04:48.873 SYMLINK libspdk_bdev_nvme.so 00:04:49.455 CC module/event/subsystems/fsdev/fsdev.o 00:04:49.455 CC module/event/subsystems/vmd/vmd.o 00:04:49.455 CC module/event/subsystems/sock/sock.o 00:04:49.455 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:49.455 CC module/event/subsystems/keyring/keyring.o 00:04:49.455 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:49.455 CC module/event/subsystems/scheduler/scheduler.o 00:04:49.455 CC module/event/subsystems/iobuf/iobuf.o 00:04:49.455 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:49.714 LIB libspdk_event_fsdev.a 00:04:49.714 LIB 
libspdk_event_vhost_blk.a 00:04:49.714 LIB libspdk_event_keyring.a 00:04:49.714 LIB libspdk_event_vmd.a 00:04:49.714 LIB libspdk_event_sock.a 00:04:49.714 LIB libspdk_event_scheduler.a 00:04:49.714 SO libspdk_event_vhost_blk.so.3.0 00:04:49.714 SO libspdk_event_keyring.so.1.0 00:04:49.714 SO libspdk_event_fsdev.so.1.0 00:04:49.714 SO libspdk_event_vmd.so.6.0 00:04:49.714 SO libspdk_event_sock.so.5.0 00:04:49.714 SO libspdk_event_scheduler.so.4.0 00:04:49.714 LIB libspdk_event_iobuf.a 00:04:49.714 SYMLINK libspdk_event_keyring.so 00:04:49.714 SYMLINK libspdk_event_fsdev.so 00:04:49.714 SYMLINK libspdk_event_vhost_blk.so 00:04:49.714 SYMLINK libspdk_event_vmd.so 00:04:49.714 SYMLINK libspdk_event_sock.so 00:04:49.714 SO libspdk_event_iobuf.so.3.0 00:04:49.714 SYMLINK libspdk_event_scheduler.so 00:04:49.973 SYMLINK libspdk_event_iobuf.so 00:04:50.232 CC module/event/subsystems/accel/accel.o 00:04:50.490 LIB libspdk_event_accel.a 00:04:50.490 SO libspdk_event_accel.so.6.0 00:04:50.490 SYMLINK libspdk_event_accel.so 00:04:51.057 CC module/event/subsystems/bdev/bdev.o 00:04:51.315 LIB libspdk_event_bdev.a 00:04:51.315 SO libspdk_event_bdev.so.6.0 00:04:51.315 SYMLINK libspdk_event_bdev.so 00:04:51.573 CC module/event/subsystems/ublk/ublk.o 00:04:51.573 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:51.573 CC module/event/subsystems/scsi/scsi.o 00:04:51.573 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:51.833 CC module/event/subsystems/nbd/nbd.o 00:04:51.833 LIB libspdk_event_ublk.a 00:04:51.833 LIB libspdk_event_scsi.a 00:04:51.833 SO libspdk_event_ublk.so.3.0 00:04:51.833 SO libspdk_event_scsi.so.6.0 00:04:51.833 LIB libspdk_event_nbd.a 00:04:51.833 SYMLINK libspdk_event_ublk.so 00:04:51.833 SO libspdk_event_nbd.so.6.0 00:04:52.091 SYMLINK libspdk_event_scsi.so 00:04:52.091 LIB libspdk_event_nvmf.a 00:04:52.091 SYMLINK libspdk_event_nbd.so 00:04:52.091 SO libspdk_event_nvmf.so.6.0 00:04:52.091 SYMLINK libspdk_event_nvmf.so 00:04:52.349 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.349 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.608 LIB libspdk_event_vhost_scsi.a 00:04:52.608 LIB libspdk_event_iscsi.a 00:04:52.608 SO libspdk_event_vhost_scsi.so.3.0 00:04:52.608 SO libspdk_event_iscsi.so.6.0 00:04:52.608 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.608 SYMLINK libspdk_event_iscsi.so 00:04:52.866 SO libspdk.so.6.0 00:04:52.866 SYMLINK libspdk.so 00:04:53.125 CC app/trace_record/trace_record.o 00:04:53.125 CC app/spdk_lspci/spdk_lspci.o 00:04:53.125 CXX app/trace/trace.o 00:04:53.125 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.125 CC app/nvmf_tgt/nvmf_main.o 00:04:53.125 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.125 CC examples/ioat/perf/perf.o 00:04:53.125 CC app/spdk_tgt/spdk_tgt.o 00:04:53.125 CC test/thread/poller_perf/poller_perf.o 00:04:53.125 CC examples/util/zipf/zipf.o 00:04:53.383 LINK spdk_lspci 00:04:53.383 LINK interrupt_tgt 00:04:53.383 LINK nvmf_tgt 00:04:53.383 LINK zipf 00:04:53.383 LINK poller_perf 00:04:53.383 LINK iscsi_tgt 00:04:53.641 LINK ioat_perf 00:04:53.641 LINK spdk_tgt 00:04:53.641 LINK spdk_trace_record 00:04:53.641 LINK spdk_trace 00:04:53.641 CC app/spdk_nvme_perf/perf.o 00:04:53.641 CC app/spdk_nvme_identify/identify.o 00:04:53.641 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.900 CC examples/ioat/verify/verify.o 00:04:53.900 CC app/spdk_top/spdk_top.o 00:04:53.900 CC app/spdk_dd/spdk_dd.o 00:04:53.900 CC test/dma/test_dma/test_dma.o 00:04:53.900 CC app/fio/nvme/fio_plugin.o 00:04:53.900 TEST_HEADER include/spdk/accel.h 00:04:53.900 TEST_HEADER include/spdk/accel_module.h 00:04:53.900 TEST_HEADER include/spdk/assert.h 00:04:53.900 TEST_HEADER include/spdk/barrier.h 00:04:53.900 TEST_HEADER include/spdk/base64.h 00:04:53.900 TEST_HEADER include/spdk/bdev.h 00:04:53.900 TEST_HEADER include/spdk/bdev_module.h 00:04:53.900 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.900 TEST_HEADER include/spdk/bit_array.h 00:04:53.900 TEST_HEADER 
include/spdk/bit_pool.h 00:04:53.900 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.900 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.900 TEST_HEADER include/spdk/blobfs.h 00:04:53.900 TEST_HEADER include/spdk/blob.h 00:04:53.900 TEST_HEADER include/spdk/conf.h 00:04:53.900 TEST_HEADER include/spdk/config.h 00:04:53.900 TEST_HEADER include/spdk/cpuset.h 00:04:53.900 TEST_HEADER include/spdk/crc16.h 00:04:53.900 TEST_HEADER include/spdk/crc32.h 00:04:53.900 TEST_HEADER include/spdk/crc64.h 00:04:53.900 LINK spdk_nvme_discover 00:04:53.900 TEST_HEADER include/spdk/dif.h 00:04:53.900 TEST_HEADER include/spdk/dma.h 00:04:53.900 TEST_HEADER include/spdk/endian.h 00:04:53.900 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.900 TEST_HEADER include/spdk/env.h 00:04:53.900 TEST_HEADER include/spdk/event.h 00:04:53.900 TEST_HEADER include/spdk/fd_group.h 00:04:53.900 TEST_HEADER include/spdk/fd.h 00:04:53.900 TEST_HEADER include/spdk/file.h 00:04:53.900 TEST_HEADER include/spdk/fsdev.h 00:04:53.900 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.900 TEST_HEADER include/spdk/ftl.h 00:04:53.900 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.900 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.900 TEST_HEADER include/spdk/hexlify.h 00:04:54.159 TEST_HEADER include/spdk/histogram_data.h 00:04:54.159 TEST_HEADER include/spdk/idxd.h 00:04:54.159 TEST_HEADER include/spdk/idxd_spec.h 00:04:54.159 TEST_HEADER include/spdk/init.h 00:04:54.159 TEST_HEADER include/spdk/ioat.h 00:04:54.159 TEST_HEADER include/spdk/ioat_spec.h 00:04:54.159 TEST_HEADER include/spdk/iscsi_spec.h 00:04:54.159 TEST_HEADER include/spdk/json.h 00:04:54.159 TEST_HEADER include/spdk/jsonrpc.h 00:04:54.159 TEST_HEADER include/spdk/keyring.h 00:04:54.159 TEST_HEADER include/spdk/keyring_module.h 00:04:54.159 TEST_HEADER include/spdk/likely.h 00:04:54.159 LINK verify 00:04:54.159 TEST_HEADER include/spdk/log.h 00:04:54.159 TEST_HEADER include/spdk/lvol.h 00:04:54.159 TEST_HEADER include/spdk/md5.h 00:04:54.159 
TEST_HEADER include/spdk/memory.h 00:04:54.159 TEST_HEADER include/spdk/mmio.h 00:04:54.159 TEST_HEADER include/spdk/nbd.h 00:04:54.159 TEST_HEADER include/spdk/net.h 00:04:54.159 TEST_HEADER include/spdk/notify.h 00:04:54.159 TEST_HEADER include/spdk/nvme.h 00:04:54.159 TEST_HEADER include/spdk/nvme_intel.h 00:04:54.159 CC test/app/bdev_svc/bdev_svc.o 00:04:54.159 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:54.159 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:54.159 TEST_HEADER include/spdk/nvme_spec.h 00:04:54.159 TEST_HEADER include/spdk/nvme_zns.h 00:04:54.159 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:54.159 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:54.159 TEST_HEADER include/spdk/nvmf.h 00:04:54.159 TEST_HEADER include/spdk/nvmf_spec.h 00:04:54.159 TEST_HEADER include/spdk/nvmf_transport.h 00:04:54.159 TEST_HEADER include/spdk/opal.h 00:04:54.159 TEST_HEADER include/spdk/opal_spec.h 00:04:54.159 TEST_HEADER include/spdk/pci_ids.h 00:04:54.159 TEST_HEADER include/spdk/pipe.h 00:04:54.159 TEST_HEADER include/spdk/queue.h 00:04:54.159 TEST_HEADER include/spdk/reduce.h 00:04:54.159 TEST_HEADER include/spdk/rpc.h 00:04:54.159 TEST_HEADER include/spdk/scheduler.h 00:04:54.159 TEST_HEADER include/spdk/scsi.h 00:04:54.159 TEST_HEADER include/spdk/scsi_spec.h 00:04:54.159 TEST_HEADER include/spdk/sock.h 00:04:54.159 TEST_HEADER include/spdk/stdinc.h 00:04:54.159 TEST_HEADER include/spdk/string.h 00:04:54.159 TEST_HEADER include/spdk/thread.h 00:04:54.159 TEST_HEADER include/spdk/trace.h 00:04:54.159 TEST_HEADER include/spdk/trace_parser.h 00:04:54.159 TEST_HEADER include/spdk/tree.h 00:04:54.159 TEST_HEADER include/spdk/ublk.h 00:04:54.159 TEST_HEADER include/spdk/util.h 00:04:54.159 TEST_HEADER include/spdk/uuid.h 00:04:54.159 TEST_HEADER include/spdk/version.h 00:04:54.159 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:54.159 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:54.159 TEST_HEADER include/spdk/vhost.h 00:04:54.159 TEST_HEADER include/spdk/vmd.h 
00:04:54.159 TEST_HEADER include/spdk/xor.h 00:04:54.159 TEST_HEADER include/spdk/zipf.h 00:04:54.159 CXX test/cpp_headers/accel.o 00:04:54.417 LINK bdev_svc 00:04:54.417 CC app/vhost/vhost.o 00:04:54.417 LINK spdk_dd 00:04:54.417 CXX test/cpp_headers/accel_module.o 00:04:54.417 LINK test_dma 00:04:54.417 CC examples/thread/thread/thread_ex.o 00:04:54.676 CXX test/cpp_headers/assert.o 00:04:54.676 LINK vhost 00:04:54.676 CXX test/cpp_headers/barrier.o 00:04:54.676 LINK spdk_nvme 00:04:54.676 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.676 LINK spdk_nvme_perf 00:04:54.676 CXX test/cpp_headers/base64.o 00:04:54.936 CXX test/cpp_headers/bdev.o 00:04:54.936 CXX test/cpp_headers/bdev_module.o 00:04:54.936 CXX test/cpp_headers/bdev_zone.o 00:04:54.936 LINK thread 00:04:54.936 CC app/fio/bdev/fio_plugin.o 00:04:54.936 LINK spdk_nvme_identify 00:04:54.936 CXX test/cpp_headers/bit_array.o 00:04:54.936 LINK spdk_top 00:04:55.194 CXX test/cpp_headers/bit_pool.o 00:04:55.194 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:55.194 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:55.194 CXX test/cpp_headers/blob_bdev.o 00:04:55.194 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:55.194 LINK nvme_fuzz 00:04:55.194 CC examples/sock/hello_world/hello_sock.o 00:04:55.453 CC test/app/histogram_perf/histogram_perf.o 00:04:55.453 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.453 CC test/app/jsoncat/jsoncat.o 00:04:55.453 CC examples/vmd/lsvmd/lsvmd.o 00:04:55.453 LINK histogram_perf 00:04:55.453 CC test/env/mem_callbacks/mem_callbacks.o 00:04:55.453 LINK jsoncat 00:04:55.453 CC test/env/vtophys/vtophys.o 00:04:55.453 LINK spdk_bdev 00:04:55.453 LINK hello_sock 00:04:55.710 CXX test/cpp_headers/blobfs.o 00:04:55.710 LINK vhost_fuzz 00:04:55.710 LINK lsvmd 00:04:55.710 CXX test/cpp_headers/blob.o 00:04:55.710 LINK vtophys 00:04:55.710 CXX test/cpp_headers/conf.o 00:04:55.710 CC test/app/stub/stub.o 00:04:55.969 CXX test/cpp_headers/config.o 00:04:55.969 CC 
test/event/event_perf/event_perf.o 00:04:55.969 CXX test/cpp_headers/cpuset.o 00:04:55.969 CC examples/vmd/led/led.o 00:04:55.969 CC test/rpc_client/rpc_client_test.o 00:04:55.969 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:55.969 LINK stub 00:04:55.969 CC test/nvme/aer/aer.o 00:04:56.227 LINK event_perf 00:04:56.227 CC test/accel/dif/dif.o 00:04:56.227 LINK mem_callbacks 00:04:56.227 LINK rpc_client_test 00:04:56.227 LINK led 00:04:56.227 CXX test/cpp_headers/crc16.o 00:04:56.227 LINK env_dpdk_post_init 00:04:56.485 CC test/env/memory/memory_ut.o 00:04:56.485 CC test/event/reactor/reactor.o 00:04:56.485 CXX test/cpp_headers/crc32.o 00:04:56.485 CC test/env/pci/pci_ut.o 00:04:56.485 LINK aer 00:04:56.485 CXX test/cpp_headers/crc64.o 00:04:56.485 LINK reactor 00:04:56.485 CC examples/idxd/perf/perf.o 00:04:56.743 CC test/blobfs/mkfs/mkfs.o 00:04:56.743 CXX test/cpp_headers/dif.o 00:04:56.743 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.743 CC test/event/reactor_perf/reactor_perf.o 00:04:56.743 CC test/nvme/reset/reset.o 00:04:57.001 LINK mkfs 00:04:57.001 CXX test/cpp_headers/dma.o 00:04:57.001 LINK pci_ut 00:04:57.001 LINK reactor_perf 00:04:57.001 LINK dif 00:04:57.001 LINK idxd_perf 00:04:57.001 CXX test/cpp_headers/endian.o 00:04:57.259 LINK reset 00:04:57.259 LINK hello_fsdev 00:04:57.259 CXX test/cpp_headers/env_dpdk.o 00:04:57.259 CC test/event/app_repeat/app_repeat.o 00:04:57.259 LINK iscsi_fuzz 00:04:57.517 CC examples/accel/perf/accel_perf.o 00:04:57.517 CXX test/cpp_headers/env.o 00:04:57.517 CC test/nvme/sgl/sgl.o 00:04:57.517 CC examples/blob/hello_world/hello_blob.o 00:04:57.517 LINK app_repeat 00:04:57.517 CC test/lvol/esnap/esnap.o 00:04:57.517 CC examples/blob/cli/blobcli.o 00:04:57.775 CC examples/nvme/hello_world/hello_world.o 00:04:57.775 CXX test/cpp_headers/event.o 00:04:57.775 LINK hello_blob 00:04:57.775 CC test/event/scheduler/scheduler.o 00:04:57.775 LINK sgl 00:04:57.775 CC test/bdev/bdevio/bdevio.o 00:04:57.775 CXX 
test/cpp_headers/fd_group.o 00:04:58.082 LINK hello_world 00:04:58.082 LINK memory_ut 00:04:58.082 CXX test/cpp_headers/fd.o 00:04:58.082 LINK scheduler 00:04:58.082 CC test/nvme/e2edp/nvme_dp.o 00:04:58.082 LINK blobcli 00:04:58.340 CC examples/nvme/reconnect/reconnect.o 00:04:58.340 CXX test/cpp_headers/file.o 00:04:58.340 LINK accel_perf 00:04:58.340 LINK bdevio 00:04:58.340 CXX test/cpp_headers/fsdev.o 00:04:58.340 CXX test/cpp_headers/fsdev_module.o 00:04:58.340 CC test/nvme/overhead/overhead.o 00:04:58.340 CXX test/cpp_headers/ftl.o 00:04:58.340 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.598 CXX test/cpp_headers/gpt_spec.o 00:04:58.598 LINK nvme_dp 00:04:58.598 CXX test/cpp_headers/hexlify.o 00:04:58.598 LINK reconnect 00:04:58.598 CC test/nvme/startup/startup.o 00:04:58.855 CC test/nvme/err_injection/err_injection.o 00:04:58.855 CXX test/cpp_headers/histogram_data.o 00:04:58.855 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:58.855 LINK overhead 00:04:58.855 CC test/nvme/reserve/reserve.o 00:04:58.855 CC examples/nvme/arbitration/arbitration.o 00:04:58.855 CC examples/bdev/hello_world/hello_bdev.o 00:04:58.855 LINK startup 00:04:58.855 CXX test/cpp_headers/idxd.o 00:04:58.855 LINK err_injection 00:04:59.113 CC examples/nvme/hotplug/hotplug.o 00:04:59.113 LINK reserve 00:04:59.113 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.113 CXX test/cpp_headers/idxd_spec.o 00:04:59.113 LINK hello_bdev 00:04:59.113 LINK arbitration 00:04:59.371 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:59.371 CXX test/cpp_headers/init.o 00:04:59.371 LINK hotplug 00:04:59.371 CC test/nvme/simple_copy/simple_copy.o 00:04:59.371 CC test/nvme/connect_stress/connect_stress.o 00:04:59.371 LINK nvme_manage 00:04:59.371 CXX test/cpp_headers/ioat.o 00:04:59.371 LINK cmb_copy 00:04:59.371 CC test/nvme/boot_partition/boot_partition.o 00:04:59.629 CC test/nvme/compliance/nvme_compliance.o 00:04:59.629 CC examples/nvme/abort/abort.o 00:04:59.629 LINK connect_stress 00:04:59.629 LINK 
simple_copy 00:04:59.629 CXX test/cpp_headers/ioat_spec.o 00:04:59.629 CXX test/cpp_headers/iscsi_spec.o 00:04:59.629 LINK boot_partition 00:04:59.629 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:59.887 CXX test/cpp_headers/json.o 00:04:59.888 CC test/nvme/fused_ordering/fused_ordering.o 00:04:59.888 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:59.888 CC test/nvme/fdp/fdp.o 00:04:59.888 LINK pmr_persistence 00:04:59.888 LINK nvme_compliance 00:04:59.888 LINK abort 00:05:00.147 CC test/nvme/cuse/cuse.o 00:05:00.147 CXX test/cpp_headers/jsonrpc.o 00:05:00.147 LINK bdevperf 00:05:00.147 LINK fused_ordering 00:05:00.147 LINK doorbell_aers 00:05:00.147 CXX test/cpp_headers/keyring.o 00:05:00.147 CXX test/cpp_headers/keyring_module.o 00:05:00.147 CXX test/cpp_headers/likely.o 00:05:00.147 CXX test/cpp_headers/log.o 00:05:00.407 CXX test/cpp_headers/lvol.o 00:05:00.407 CXX test/cpp_headers/md5.o 00:05:00.407 LINK fdp 00:05:00.407 CXX test/cpp_headers/memory.o 00:05:00.407 CXX test/cpp_headers/mmio.o 00:05:00.407 CXX test/cpp_headers/nbd.o 00:05:00.407 CXX test/cpp_headers/net.o 00:05:00.407 CXX test/cpp_headers/notify.o 00:05:00.407 CXX test/cpp_headers/nvme.o 00:05:00.665 CXX test/cpp_headers/nvme_intel.o 00:05:00.665 CXX test/cpp_headers/nvme_ocssd.o 00:05:00.665 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:00.665 CXX test/cpp_headers/nvme_spec.o 00:05:00.665 CXX test/cpp_headers/nvme_zns.o 00:05:00.665 CC examples/nvmf/nvmf/nvmf.o 00:05:00.665 CXX test/cpp_headers/nvmf_cmd.o 00:05:00.665 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:00.665 CXX test/cpp_headers/nvmf.o 00:05:00.665 CXX test/cpp_headers/nvmf_spec.o 00:05:00.924 CXX test/cpp_headers/nvmf_transport.o 00:05:00.924 CXX test/cpp_headers/opal.o 00:05:00.924 CXX test/cpp_headers/opal_spec.o 00:05:00.924 CXX test/cpp_headers/pci_ids.o 00:05:00.924 CXX test/cpp_headers/pipe.o 00:05:00.924 CXX test/cpp_headers/queue.o 00:05:00.924 LINK nvmf 00:05:00.924 CXX test/cpp_headers/reduce.o 00:05:00.924 CXX 
test/cpp_headers/rpc.o 00:05:00.924 CXX test/cpp_headers/scheduler.o 00:05:00.924 CXX test/cpp_headers/scsi.o 00:05:00.924 CXX test/cpp_headers/scsi_spec.o 00:05:00.924 CXX test/cpp_headers/sock.o 00:05:01.183 CXX test/cpp_headers/stdinc.o 00:05:01.183 CXX test/cpp_headers/string.o 00:05:01.183 CXX test/cpp_headers/thread.o 00:05:01.183 CXX test/cpp_headers/trace.o 00:05:01.183 CXX test/cpp_headers/trace_parser.o 00:05:01.183 CXX test/cpp_headers/tree.o 00:05:01.183 CXX test/cpp_headers/ublk.o 00:05:01.183 CXX test/cpp_headers/util.o 00:05:01.183 CXX test/cpp_headers/uuid.o 00:05:01.183 CXX test/cpp_headers/version.o 00:05:01.442 CXX test/cpp_headers/vfio_user_pci.o 00:05:01.442 CXX test/cpp_headers/vfio_user_spec.o 00:05:01.442 CXX test/cpp_headers/vhost.o 00:05:01.442 CXX test/cpp_headers/vmd.o 00:05:01.442 CXX test/cpp_headers/xor.o 00:05:01.442 CXX test/cpp_headers/zipf.o 00:05:01.702 LINK cuse 00:05:04.996 LINK esnap 00:05:05.254 00:05:05.254 real 1m36.518s 00:05:05.254 user 8m38.288s 00:05:05.254 sys 1m50.195s 00:05:05.254 11:21:04 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:05.254 ************************************ 00:05:05.254 END TEST make 00:05:05.254 ************************************ 00:05:05.254 11:21:04 make -- common/autotest_common.sh@10 -- $ set +x 00:05:05.254 11:21:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:05.254 11:21:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:05.254 11:21:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:05.254 11:21:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.254 11:21:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:05.254 11:21:04 -- pm/common@44 -- $ pid=5465 00:05:05.254 11:21:04 -- pm/common@50 -- $ kill -TERM 5465 00:05:05.254 11:21:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.254 11:21:04 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:05.254 11:21:04 -- pm/common@44 -- $ pid=5467 00:05:05.254 11:21:04 -- pm/common@50 -- $ kill -TERM 5467 00:05:05.254 11:21:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:05.254 11:21:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:05.254 11:21:04 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.254 11:21:04 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.254 11:21:04 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.512 11:21:04 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.512 11:21:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.512 11:21:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.512 11:21:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.512 11:21:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.512 11:21:04 -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.512 11:21:04 -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.512 11:21:04 -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.512 11:21:04 -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.512 11:21:04 -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.512 11:21:04 -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.512 11:21:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.512 11:21:04 -- scripts/common.sh@344 -- # case "$op" in 00:05:05.512 11:21:04 -- scripts/common.sh@345 -- # : 1 00:05:05.512 11:21:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.512 11:21:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.512 11:21:04 -- scripts/common.sh@365 -- # decimal 1 00:05:05.512 11:21:04 -- scripts/common.sh@353 -- # local d=1 00:05:05.512 11:21:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.512 11:21:04 -- scripts/common.sh@355 -- # echo 1 00:05:05.512 11:21:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.512 11:21:04 -- scripts/common.sh@366 -- # decimal 2 00:05:05.512 11:21:04 -- scripts/common.sh@353 -- # local d=2 00:05:05.512 11:21:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.512 11:21:04 -- scripts/common.sh@355 -- # echo 2 00:05:05.512 11:21:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.512 11:21:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.512 11:21:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.512 11:21:04 -- scripts/common.sh@368 -- # return 0 00:05:05.512 11:21:04 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.512 11:21:04 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.512 --rc genhtml_branch_coverage=1 00:05:05.512 --rc genhtml_function_coverage=1 00:05:05.512 --rc genhtml_legend=1 00:05:05.512 --rc geninfo_all_blocks=1 00:05:05.512 --rc geninfo_unexecuted_blocks=1 00:05:05.512 00:05:05.512 ' 00:05:05.512 11:21:04 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.512 --rc genhtml_branch_coverage=1 00:05:05.512 --rc genhtml_function_coverage=1 00:05:05.512 --rc genhtml_legend=1 00:05:05.512 --rc geninfo_all_blocks=1 00:05:05.512 --rc geninfo_unexecuted_blocks=1 00:05:05.512 00:05:05.512 ' 00:05:05.512 11:21:04 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.512 --rc genhtml_branch_coverage=1 00:05:05.512 --rc 
genhtml_function_coverage=1 00:05:05.512 --rc genhtml_legend=1 00:05:05.512 --rc geninfo_all_blocks=1 00:05:05.512 --rc geninfo_unexecuted_blocks=1 00:05:05.512 00:05:05.512 ' 00:05:05.512 11:21:04 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.512 --rc genhtml_branch_coverage=1 00:05:05.512 --rc genhtml_function_coverage=1 00:05:05.512 --rc genhtml_legend=1 00:05:05.512 --rc geninfo_all_blocks=1 00:05:05.512 --rc geninfo_unexecuted_blocks=1 00:05:05.512 00:05:05.512 ' 00:05:05.512 11:21:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:05.512 11:21:04 -- nvmf/common.sh@7 -- # uname -s 00:05:05.512 11:21:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.512 11:21:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.512 11:21:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.512 11:21:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.512 11:21:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.512 11:21:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.512 11:21:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.512 11:21:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.512 11:21:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.512 11:21:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.512 11:21:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:634d121f-067e-4552-bd3e-8aec06a10c48 00:05:05.512 11:21:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=634d121f-067e-4552-bd3e-8aec06a10c48 00:05:05.512 11:21:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.512 11:21:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.512 11:21:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.512 11:21:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:05.512 11:21:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.512 11:21:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.512 11:21:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.512 11:21:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.512 11:21:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.512 11:21:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.512 11:21:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.513 11:21:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.513 11:21:04 -- paths/export.sh@5 -- # export PATH 00:05:05.513 11:21:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.513 11:21:04 -- nvmf/common.sh@51 -- # : 0 00:05:05.513 11:21:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.513 11:21:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.513 11:21:04 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:05.513 11:21:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.513 11:21:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.513 11:21:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.513 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.513 11:21:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.513 11:21:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.513 11:21:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.513 11:21:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:05.513 11:21:04 -- spdk/autotest.sh@32 -- # uname -s 00:05:05.513 11:21:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:05.513 11:21:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:05.513 11:21:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:05.513 11:21:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:05.513 11:21:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:05.513 11:21:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:05.513 11:21:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:05.513 11:21:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:05.513 11:21:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54560 00:05:05.513 11:21:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:05.513 11:21:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:05.513 11:21:04 -- pm/common@17 -- # local monitor 00:05:05.513 11:21:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.513 11:21:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.513 11:21:04 -- pm/common@21 -- # date +%s 00:05:05.513 11:21:04 -- pm/common@25 -- # sleep 1 00:05:05.513 11:21:04 -- 
pm/common@21 -- # date +%s 00:05:05.513 11:21:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730805664 00:05:05.513 11:21:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730805664 00:05:05.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730805664_collect-cpu-load.pm.log 00:05:05.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730805664_collect-vmstat.pm.log 00:05:06.886 11:21:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:06.886 11:21:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:06.886 11:21:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.886 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:06.886 11:21:05 -- spdk/autotest.sh@59 -- # create_test_list 00:05:06.886 11:21:05 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:06.886 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:06.886 11:21:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:06.886 11:21:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:06.886 11:21:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:06.886 11:21:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:06.886 11:21:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:06.886 11:21:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:06.886 11:21:05 -- common/autotest_common.sh@1455 -- # uname 00:05:06.886 11:21:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:06.886 11:21:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:06.886 11:21:05 -- common/autotest_common.sh@1475 -- 
# uname 00:05:06.886 11:21:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:06.886 11:21:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:06.886 11:21:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:06.886 lcov: LCOV version 1.15 00:05:06.886 11:21:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:24.955 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:24.955 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:39.828 11:21:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:39.828 11:21:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.828 11:21:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.828 11:21:37 -- spdk/autotest.sh@78 -- # rm -f 00:05:39.828 11:21:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.828 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:39.828 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:39.828 11:21:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:39.828 11:21:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:39.828 11:21:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:39.828 11:21:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:39.828 
11:21:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:39.828 11:21:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:39.828 11:21:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:39.828 11:21:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:39.828 11:21:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:39.828 11:21:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:39.828 11:21:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:39.828 11:21:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:39.828 11:21:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:39.828 11:21:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:39.828 11:21:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:39.828 11:21:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:39.828 11:21:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:39.828 11:21:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:39.828 11:21:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:39.828 11:21:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.828 11:21:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:39.828 11:21:38 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:39.828 11:21:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:39.828 11:21:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:39.828 No valid GPT data, bailing 00:05:39.828 11:21:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.828 11:21:38 -- scripts/common.sh@394 -- # pt= 00:05:39.828 11:21:38 -- scripts/common.sh@395 -- # return 1 00:05:39.828 11:21:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:39.828 1+0 records in 00:05:39.828 1+0 records out 00:05:39.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677001 s, 155 MB/s 00:05:39.828 11:21:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.828 11:21:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:39.828 11:21:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:39.828 11:21:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:39.829 11:21:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:39.829 No valid GPT data, bailing 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # pt= 00:05:39.829 11:21:38 -- scripts/common.sh@395 -- # return 1 00:05:39.829 11:21:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:39.829 1+0 records in 00:05:39.829 1+0 records out 00:05:39.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918823 s, 114 MB/s 00:05:39.829 11:21:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.829 11:21:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:39.829 11:21:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:39.829 11:21:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:39.829 11:21:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:39.829 No valid GPT data, bailing 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # pt= 00:05:39.829 11:21:38 -- scripts/common.sh@395 -- # return 1 00:05:39.829 11:21:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:39.829 1+0 records in 00:05:39.829 1+0 records out 00:05:39.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636013 s, 165 MB/s 00:05:39.829 11:21:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.829 11:21:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:39.829 11:21:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:39.829 11:21:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:39.829 11:21:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:39.829 No valid GPT data, bailing 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:39.829 11:21:38 -- scripts/common.sh@394 -- # pt= 00:05:39.829 11:21:38 -- scripts/common.sh@395 -- # return 1 00:05:39.829 11:21:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:39.829 1+0 records in 00:05:39.829 1+0 records out 00:05:39.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641762 s, 163 MB/s 00:05:39.829 11:21:38 -- spdk/autotest.sh@105 -- # sync 00:05:39.829 11:21:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:39.829 11:21:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:39.829 11:21:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:43.118 11:21:41 -- spdk/autotest.sh@111 -- # uname -s 00:05:43.118 11:21:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:43.118 11:21:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:43.118 11:21:41 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:43.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.377 Hugepages 00:05:43.377 node hugesize free / total 00:05:43.377 node0 1048576kB 0 / 0 00:05:43.377 node0 2048kB 0 / 0 00:05:43.377 00:05:43.377 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.637 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.637 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.896 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:43.896 11:21:43 -- spdk/autotest.sh@117 -- # uname -s 00:05:43.896 11:21:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:43.896 11:21:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:43.896 11:21:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.832 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.832 11:21:44 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:46.210 11:21:45 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:46.210 11:21:45 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:46.210 11:21:45 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:46.210 11:21:45 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:46.210 11:21:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:46.210 11:21:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:46.210 11:21:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.210 11:21:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.210 11:21:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:46.210 11:21:45 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:46.210 11:21:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:46.210 11:21:45 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.469 Waiting for block devices as requested 00:05:46.728 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.728 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.728 11:21:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.728 11:21:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:46.728 11:21:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:46.728 11:21:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:46.728 11:21:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:46.728 11:21:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.728 11:21:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.728 11:21:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.728 11:21:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.728 11:21:45 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:46.728 11:21:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:46.728 11:21:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.728 11:21:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.728 11:21:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.729 11:21:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.729 11:21:45 -- common/autotest_common.sh@1541 -- # continue 00:05:46.729 11:21:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.729 11:21:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:46.729 11:21:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:46.729 11:21:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.988 11:21:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:46.988 11:21:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:46.988 11:21:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.988 11:21:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.988 11:21:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.988 11:21:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.988 11:21:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.988 11:21:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:46.988 11:21:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.988 11:21:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.988 11:21:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.988 11:21:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.988 11:21:46 -- common/autotest_common.sh@1541 -- # continue 00:05:46.988 11:21:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.988 11:21:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.988 11:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 11:21:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.988 11:21:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.988 11:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 11:21:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.925 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.925 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.925 11:21:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.925 11:21:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.925 11:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.184 11:21:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:48.184 11:21:47 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:48.184 11:21:47 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.184 11:21:47 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:48.184 11:21:47 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:48.184 11:21:47 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:48.184 11:21:47 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:48.184 11:21:47 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:48.184 
11:21:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:48.184 11:21:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:48.184 11:21:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.184 11:21:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.184 11:21:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:48.184 11:21:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:48.184 11:21:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.184 11:21:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:48.184 11:21:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:48.184 11:21:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:48.184 11:21:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:48.184 11:21:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:48.184 11:21:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:48.184 11:21:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:48.184 11:21:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:48.184 11:21:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:48.184 11:21:47 -- common/autotest_common.sh@1570 -- # return 0 00:05:48.184 11:21:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:48.184 11:21:47 -- common/autotest_common.sh@1578 -- # return 0 00:05:48.184 11:21:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:48.184 11:21:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:48.184 11:21:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:48.184 11:21:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:48.185 11:21:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:48.185 11:21:47 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.185 11:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.185 11:21:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:48.185 11:21:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:48.185 11:21:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.185 11:21:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.185 11:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.185 ************************************ 00:05:48.185 START TEST env 00:05:48.185 ************************************ 00:05:48.185 11:21:47 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:48.444 * Looking for test storage... 00:05:48.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.444 11:21:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.444 11:21:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.444 11:21:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.444 11:21:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.444 11:21:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.444 11:21:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.444 11:21:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.444 11:21:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.444 11:21:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.444 11:21:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.444 11:21:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.444 11:21:47 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:48.444 11:21:47 env -- scripts/common.sh@345 -- # : 1 00:05:48.444 11:21:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.444 11:21:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.444 11:21:47 env -- scripts/common.sh@365 -- # decimal 1 00:05:48.444 11:21:47 env -- scripts/common.sh@353 -- # local d=1 00:05:48.444 11:21:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.444 11:21:47 env -- scripts/common.sh@355 -- # echo 1 00:05:48.444 11:21:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.444 11:21:47 env -- scripts/common.sh@366 -- # decimal 2 00:05:48.444 11:21:47 env -- scripts/common.sh@353 -- # local d=2 00:05:48.444 11:21:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.444 11:21:47 env -- scripts/common.sh@355 -- # echo 2 00:05:48.444 11:21:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.444 11:21:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.444 11:21:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.444 11:21:47 env -- scripts/common.sh@368 -- # return 0 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.444 --rc genhtml_branch_coverage=1 00:05:48.444 --rc genhtml_function_coverage=1 00:05:48.444 --rc genhtml_legend=1 00:05:48.444 --rc geninfo_all_blocks=1 00:05:48.444 --rc geninfo_unexecuted_blocks=1 00:05:48.444 00:05:48.444 ' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.444 --rc genhtml_branch_coverage=1 00:05:48.444 --rc genhtml_function_coverage=1 00:05:48.444 --rc genhtml_legend=1 00:05:48.444 --rc 
geninfo_all_blocks=1 00:05:48.444 --rc geninfo_unexecuted_blocks=1 00:05:48.444 00:05:48.444 ' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.444 --rc genhtml_branch_coverage=1 00:05:48.444 --rc genhtml_function_coverage=1 00:05:48.444 --rc genhtml_legend=1 00:05:48.444 --rc geninfo_all_blocks=1 00:05:48.444 --rc geninfo_unexecuted_blocks=1 00:05:48.444 00:05:48.444 ' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.444 --rc genhtml_branch_coverage=1 00:05:48.444 --rc genhtml_function_coverage=1 00:05:48.444 --rc genhtml_legend=1 00:05:48.444 --rc geninfo_all_blocks=1 00:05:48.444 --rc geninfo_unexecuted_blocks=1 00:05:48.444 00:05:48.444 ' 00:05:48.444 11:21:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.444 11:21:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.444 11:21:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.444 ************************************ 00:05:48.444 START TEST env_memory 00:05:48.444 ************************************ 00:05:48.444 11:21:47 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:48.444 00:05:48.444 00:05:48.444 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.444 http://cunit.sourceforge.net/ 00:05:48.444 00:05:48.444 00:05:48.444 Suite: memory 00:05:48.444 Test: alloc and free memory map ...[2024-11-05 11:21:47.673996] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:48.444 passed 00:05:48.703 Test: mem map translation ...[2024-11-05 11:21:47.721184] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:48.703 [2024-11-05 11:21:47.721283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:48.703 [2024-11-05 11:21:47.721352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:48.703 [2024-11-05 11:21:47.721373] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:48.703 passed 00:05:48.703 Test: mem map registration ...[2024-11-05 11:21:47.788428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:48.703 [2024-11-05 11:21:47.788495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:48.703 passed 00:05:48.703 Test: mem map adjacent registrations ...passed 00:05:48.703 00:05:48.703 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.703 suites 1 1 n/a 0 0 00:05:48.703 tests 4 4 4 0 0 00:05:48.703 asserts 152 152 152 0 n/a 00:05:48.703 00:05:48.703 Elapsed time = 0.254 seconds 00:05:48.703 00:05:48.703 real 0m0.314s 00:05:48.703 user 0m0.261s 00:05:48.703 sys 0m0.039s 00:05:48.703 11:21:47 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.703 11:21:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 ************************************ 00:05:48.703 END TEST env_memory 00:05:48.703 ************************************ 00:05:48.703 11:21:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.703 
11:21:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.703 11:21:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.703 11:21:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 ************************************ 00:05:48.703 START TEST env_vtophys 00:05:48.703 ************************************ 00:05:48.703 11:21:47 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.962 EAL: lib.eal log level changed from notice to debug 00:05:48.962 EAL: Detected lcore 0 as core 0 on socket 0 00:05:48.962 EAL: Detected lcore 1 as core 0 on socket 0 00:05:48.962 EAL: Detected lcore 2 as core 0 on socket 0 00:05:48.962 EAL: Detected lcore 3 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 4 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 5 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 6 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 7 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 8 as core 0 on socket 0 00:05:48.963 EAL: Detected lcore 9 as core 0 on socket 0 00:05:48.963 EAL: Maximum logical cores by configuration: 128 00:05:48.963 EAL: Detected CPU lcores: 10 00:05:48.963 EAL: Detected NUMA nodes: 1 00:05:48.963 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:48.963 EAL: Detected shared linkage of DPDK 00:05:48.963 EAL: No shared files mode enabled, IPC will be disabled 00:05:48.963 EAL: Selected IOVA mode 'PA' 00:05:48.963 EAL: Probing VFIO support... 00:05:48.963 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.963 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:48.963 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.963 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.963 EAL: Setting up physically contiguous memory... 
00:05:48.963 EAL: Setting maximum number of open files to 524288 00:05:48.963 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.963 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.963 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.963 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.963 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.963 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.963 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.963 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.963 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.963 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.963 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.963 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.963 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.963 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:48.963 EAL: Hugepages will be freed exactly as allocated. 
00:05:48.963 EAL: No shared files mode enabled, IPC is disabled 00:05:48.963 EAL: No shared files mode enabled, IPC is disabled 00:05:48.963 EAL: TSC frequency is ~2290000 KHz 00:05:48.963 EAL: Main lcore 0 is ready (tid=7fd6bf8efa40;cpuset=[0]) 00:05:48.963 EAL: Trying to obtain current memory policy. 00:05:48.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.963 EAL: Restoring previous memory policy: 0 00:05:48.963 EAL: request: mp_malloc_sync 00:05:48.963 EAL: No shared files mode enabled, IPC is disabled 00:05:48.963 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.963 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.963 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.963 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.963 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:48.963 00:05:48.963 00:05:48.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.963 http://cunit.sourceforge.net/ 00:05:48.963 00:05:48.963 00:05:48.963 Suite: components_suite 00:05:49.530 Test: vtophys_malloc_test ...passed 00:05:49.531 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:49.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.531 EAL: Restoring previous memory policy: 4 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was expanded by 4MB 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was shrunk by 4MB 00:05:49.531 EAL: Trying to obtain current memory policy. 
00:05:49.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.531 EAL: Restoring previous memory policy: 4 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was expanded by 6MB 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was shrunk by 6MB 00:05:49.531 EAL: Trying to obtain current memory policy. 00:05:49.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.531 EAL: Restoring previous memory policy: 4 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was expanded by 10MB 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was shrunk by 10MB 00:05:49.531 EAL: Trying to obtain current memory policy. 00:05:49.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.531 EAL: Restoring previous memory policy: 4 00:05:49.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.531 EAL: request: mp_malloc_sync 00:05:49.531 EAL: No shared files mode enabled, IPC is disabled 00:05:49.531 EAL: Heap on socket 0 was expanded by 18MB 00:05:49.789 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.789 EAL: request: mp_malloc_sync 00:05:49.789 EAL: No shared files mode enabled, IPC is disabled 00:05:49.789 EAL: Heap on socket 0 was shrunk by 18MB 00:05:49.789 EAL: Trying to obtain current memory policy. 
00:05:49.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.789 EAL: Restoring previous memory policy: 4 00:05:49.789 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.789 EAL: request: mp_malloc_sync 00:05:49.789 EAL: No shared files mode enabled, IPC is disabled 00:05:49.789 EAL: Heap on socket 0 was expanded by 34MB 00:05:49.789 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.789 EAL: request: mp_malloc_sync 00:05:49.789 EAL: No shared files mode enabled, IPC is disabled 00:05:49.789 EAL: Heap on socket 0 was shrunk by 34MB 00:05:49.789 EAL: Trying to obtain current memory policy. 00:05:49.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.790 EAL: Restoring previous memory policy: 4 00:05:49.790 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.790 EAL: request: mp_malloc_sync 00:05:49.790 EAL: No shared files mode enabled, IPC is disabled 00:05:49.790 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.048 EAL: request: mp_malloc_sync 00:05:50.048 EAL: No shared files mode enabled, IPC is disabled 00:05:50.048 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.048 EAL: Trying to obtain current memory policy. 00:05:50.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.307 EAL: Restoring previous memory policy: 4 00:05:50.307 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.307 EAL: request: mp_malloc_sync 00:05:50.307 EAL: No shared files mode enabled, IPC is disabled 00:05:50.307 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.566 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.566 EAL: request: mp_malloc_sync 00:05:50.566 EAL: No shared files mode enabled, IPC is disabled 00:05:50.566 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.825 EAL: Trying to obtain current memory policy. 
00:05:50.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.825 EAL: Restoring previous memory policy: 4 00:05:50.825 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.825 EAL: request: mp_malloc_sync 00:05:50.825 EAL: No shared files mode enabled, IPC is disabled 00:05:50.825 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.392 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.392 EAL: request: mp_malloc_sync 00:05:51.392 EAL: No shared files mode enabled, IPC is disabled 00:05:51.392 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.959 EAL: Trying to obtain current memory policy. 00:05:51.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.219 EAL: Restoring previous memory policy: 4 00:05:52.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.219 EAL: request: mp_malloc_sync 00:05:52.219 EAL: No shared files mode enabled, IPC is disabled 00:05:52.219 EAL: Heap on socket 0 was expanded by 514MB 00:05:53.154 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.414 EAL: request: mp_malloc_sync 00:05:53.414 EAL: No shared files mode enabled, IPC is disabled 00:05:53.414 EAL: Heap on socket 0 was shrunk by 514MB 00:05:54.351 EAL: Trying to obtain current memory policy. 
00:05:54.351 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.611 EAL: Restoring previous memory policy: 4
00:05:54.611 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.611 EAL: request: mp_malloc_sync
00:05:54.611 EAL: No shared files mode enabled, IPC is disabled
00:05:54.611 EAL: Heap on socket 0 was expanded by 1026MB
00:05:56.523 EAL: Calling mem event callback 'spdk:(nil)'
00:05:57.090 EAL: request: mp_malloc_sync
00:05:57.090 EAL: No shared files mode enabled, IPC is disabled
00:05:57.090 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:58.997 passed
00:05:58.997
00:05:58.997 Run Summary: Type Total Ran Passed Failed Inactive
00:05:58.997 suites 1 1 n/a 0 0
00:05:58.997 tests 2 2 2 0 0
00:05:58.997 asserts 5803 5803 5803 0 n/a
00:05:58.997
00:05:58.997 Elapsed time = 9.590 seconds
00:05:58.997 EAL: Calling mem event callback 'spdk:(nil)'
00:05:58.997 EAL: request: mp_malloc_sync
00:05:58.997 EAL: No shared files mode enabled, IPC is disabled
00:05:58.997 EAL: Heap on socket 0 was shrunk by 2MB
00:05:58.997 EAL: No shared files mode enabled, IPC is disabled
00:05:58.997 EAL: No shared files mode enabled, IPC is disabled
00:05:58.997 EAL: No shared files mode enabled, IPC is disabled
00:05:58.997
00:05:58.997 real 0m9.938s
00:05:58.997 user 0m8.421s
00:05:58.997 sys 0m1.349s
00:05:58.997 11:21:57 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:58.997 11:21:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:58.997 ************************************
00:05:58.997 END TEST env_vtophys
00:05:58.997 ************************************
00:05:58.997 11:21:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:58.997 11:21:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:58.997 11:21:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:58.997 11:21:57 env -- common/autotest_common.sh@10 -- # set +x
00:05:58.997 ************************************
00:05:58.997 START TEST env_pci
************************************
00:05:58.997 11:21:57 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:58.997
00:05:58.997
00:05:58.997 CUnit - A unit testing framework for C - Version 2.1-3
00:05:58.997 http://cunit.sourceforge.net/
00:05:58.997
00:05:58.997
00:05:58.997 Suite: pci
00:05:58.997 Test: pci_hook ...[2024-11-05 11:21:58.029342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56902 has claimed it
00:05:58.997 passed
00:05:58.997
00:05:58.998 Run Summary: Type Total Ran Passed Failed Inactive
00:05:58.998 suites 1 1 n/a 0 0
00:05:58.998 tests 1 1 1 0 0
00:05:58.998 asserts 25 25 25 0 n/a
00:05:58.998
00:05:58.998 Elapsed time = 0.008 seconds
00:05:58.998 EAL: Cannot find device (10000:00:01.0)
00:05:58.998 EAL: Failed to attach device on primary process
00:05:58.998
00:05:58.998 real 0m0.106s
00:05:58.998 user 0m0.050s
00:05:58.998 sys 0m0.055s
00:05:58.998 11:21:58 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:58.998 11:21:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:58.998 ************************************
00:05:58.998 END TEST env_pci
************************************
00:05:58.998 11:21:58 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:58.998 11:21:58 env -- env/env.sh@15 -- # uname
00:05:58.998 11:21:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:58.998 11:21:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:58.998 11:21:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:58.998 11:21:58 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:05:58.998 11:21:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:58.998 11:21:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:58.998 ************************************
00:05:58.998 START TEST env_dpdk_post_init
00:05:58.998 ************************************
00:05:58.998 11:21:58 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
EAL: Detected CPU lcores: 10
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
TELEMETRY: No legacy callbacks, legacy socket not created
EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
Starting DPDK initialization...
Starting SPDK post initialization...
SPDK NVMe probe
Attaching to 0000:00:10.0
Attaching to 0000:00:11.0
Attached to 0000:00:10.0
Attached to 0000:00:11.0
Cleaning up...
00:05:59.258
00:05:59.258 real 0m0.294s
00:05:59.258 user 0m0.101s
00:05:59.258 sys 0m0.094s
00:05:59.258 11:21:58 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:59.258 11:21:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:59.258 ************************************
00:05:59.258 END TEST env_dpdk_post_init
00:05:59.258 ************************************
00:05:59.258 11:21:58 env -- env/env.sh@26 -- # uname
00:05:59.258 11:21:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:59.258 11:21:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.258 11:21:58 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:59.258 11:21:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:59.258 11:21:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.258 ************************************
00:05:59.258 START TEST env_mem_callbacks
00:05:59.258 ************************************
00:05:59.258 11:21:58 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.517 EAL: Detected CPU lcores: 10
00:05:59.517 EAL: Detected NUMA nodes: 1
00:05:59.517 EAL: Detected shared linkage of DPDK
00:05:59.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:59.517 EAL: Selected IOVA mode 'PA'
00:05:59.517
00:05:59.517
00:05:59.517 CUnit - A unit testing framework for C - Version 2.1-3
00:05:59.517 http://cunit.sourceforge.net/
00:05:59.517
00:05:59.517
00:05:59.517 Suite: memory
00:05:59.517 Test: test ...
00:05:59.517 register 0x200000200000 2097152
00:05:59.517 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:59.517 malloc 3145728
00:05:59.517 register 0x200000400000 4194304
00:05:59.517 buf 0x2000004fffc0 len 3145728 PASSED
00:05:59.517 malloc 64
00:05:59.517 buf 0x2000004ffec0 len 64 PASSED
00:05:59.517 malloc 4194304
00:05:59.517 register 0x200000800000 6291456
00:05:59.517 buf 0x2000009fffc0 len 4194304 PASSED
00:05:59.517 free 0x2000004fffc0 3145728
00:05:59.517 free 0x2000004ffec0 64
00:05:59.517 unregister 0x200000400000 4194304 PASSED
00:05:59.517 free 0x2000009fffc0 4194304
00:05:59.517 unregister 0x200000800000 6291456 PASSED
00:05:59.517 malloc 8388608
00:05:59.517 register 0x200000400000 10485760
00:05:59.517 buf 0x2000005fffc0 len 8388608 PASSED
00:05:59.517 free 0x2000005fffc0 8388608
00:05:59.517 unregister 0x200000400000 10485760 PASSED
00:05:59.776 passed
00:05:59.776
00:05:59.776 Run Summary: Type Total Ran Passed Failed Inactive
00:05:59.776 suites 1 1 n/a 0 0
00:05:59.776 tests 1 1 1 0 0
00:05:59.776 asserts 15 15 15 0 n/a
00:05:59.776
00:05:59.776 Elapsed time = 0.087 seconds
00:05:59.776
00:05:59.776 real 0m0.295s
00:05:59.776 user 0m0.115s
00:05:59.776 sys 0m0.077s
00:05:59.776 11:21:58 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:59.776 11:21:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:59.776 ************************************
00:05:59.776 END TEST env_mem_callbacks
************************************
00:05:59.776 ************************************
00:05:59.776 END TEST env
************************************
00:05:59.776
00:05:59.776 real 0m11.538s
00:05:59.776 user 0m9.191s
00:05:59.776 sys 0m1.986s
00:05:59.776 11:21:58 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:59.776 11:21:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.776 11:21:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:59.776 11:21:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:59.776 11:21:58 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:59.776 11:21:58 -- common/autotest_common.sh@10 -- # set +x
00:05:59.776 ************************************
00:05:59.776 START TEST rpc
************************************
00:05:59.776 11:21:58 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:59.776 * Looking for test storage...
00:06:00.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:00.035 11:21:59 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:00.035 11:21:59 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:06:00.035 11:21:59 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:00.036 11:21:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:00.036 11:21:59 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:00.036 11:21:59 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:00.036 11:21:59 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:00.036 11:21:59 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:00.036 11:21:59 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:00.036 11:21:59 rpc -- scripts/common.sh@345 -- # : 1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:00.036 11:21:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:00.036 11:21:59 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@353 -- # local d=1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.036 11:21:59 rpc -- scripts/common.sh@355 -- # echo 1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:00.036 11:21:59 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@353 -- # local d=2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.036 11:21:59 rpc -- scripts/common.sh@355 -- # echo 2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:00.036 11:21:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:00.036 11:21:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:00.036 11:21:59 rpc -- scripts/common.sh@368 -- # return 0
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.036 --rc genhtml_branch_coverage=1
00:06:00.036 --rc genhtml_function_coverage=1
00:06:00.036 --rc genhtml_legend=1
00:06:00.036 --rc geninfo_all_blocks=1
00:06:00.036 --rc geninfo_unexecuted_blocks=1
00:06:00.036
00:06:00.036 '
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.036 --rc genhtml_branch_coverage=1
00:06:00.036 --rc genhtml_function_coverage=1
00:06:00.036 --rc genhtml_legend=1
00:06:00.036 --rc geninfo_all_blocks=1
00:06:00.036 --rc geninfo_unexecuted_blocks=1
00:06:00.036
00:06:00.036 '
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.036 --rc genhtml_branch_coverage=1
00:06:00.036 --rc genhtml_function_coverage=1
00:06:00.036 --rc genhtml_legend=1
00:06:00.036 --rc geninfo_all_blocks=1
00:06:00.036 --rc geninfo_unexecuted_blocks=1
00:06:00.036
00:06:00.036 '
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.036 --rc genhtml_branch_coverage=1
00:06:00.036 --rc genhtml_function_coverage=1
00:06:00.036 --rc genhtml_legend=1
00:06:00.036 --rc geninfo_all_blocks=1
00:06:00.036 --rc geninfo_unexecuted_blocks=1
00:06:00.036
00:06:00.036 '
00:06:00.036 11:21:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57034
00:06:00.036 11:21:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:00.036 11:21:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:00.036 11:21:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57034
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@833 -- # '[' -z 57034 ']'
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:00.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:00.036 11:21:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:00.036 [2024-11-05 11:21:59.277023] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization...
00:06:00.036 [2024-11-05 11:21:59.277215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57034 ]
00:06:00.295 [2024-11-05 11:21:59.461575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.553 [2024-11-05 11:21:59.610108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:00.553 [2024-11-05 11:21:59.610206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57034' to capture a snapshot of events at runtime.
00:06:00.553 [2024-11-05 11:21:59.610219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:00.553 [2024-11-05 11:21:59.610231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:00.553 [2024-11-05 11:21:59.610239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57034 for offline analysis/debug.
00:06:00.553 [2024-11-05 11:21:59.611786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.490 11:22:00 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:01.490 11:22:00 rpc -- common/autotest_common.sh@866 -- # return 0
00:06:01.490 11:22:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:01.490 11:22:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:01.490 11:22:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:01.490 11:22:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:01.490 11:22:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:01.490 11:22:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:01.490 11:22:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:01.490 ************************************
00:06:01.490 START TEST rpc_integrity
************************************
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:06:01.490 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.490 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:01.490 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:01.490 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:01.490 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.490 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:01.749 {
00:06:01.749 "name": "Malloc0",
00:06:01.749 "aliases": [
00:06:01.749 "43e42aae-0280-41af-bf52-1fd3ad1c2bb1"
00:06:01.749 ],
00:06:01.749 "product_name": "Malloc disk",
00:06:01.749 "block_size": 512,
00:06:01.749 "num_blocks": 16384,
00:06:01.749 "uuid": "43e42aae-0280-41af-bf52-1fd3ad1c2bb1",
00:06:01.749 "assigned_rate_limits": {
00:06:01.749 "rw_ios_per_sec": 0,
00:06:01.749 "rw_mbytes_per_sec": 0,
00:06:01.749 "r_mbytes_per_sec": 0,
00:06:01.749 "w_mbytes_per_sec": 0
00:06:01.749 },
00:06:01.749 "claimed": false,
00:06:01.749 "zoned": false,
00:06:01.749 "supported_io_types": {
00:06:01.749 "read": true,
00:06:01.749 "write": true,
00:06:01.749 "unmap": true,
00:06:01.749 "flush": true,
00:06:01.749 "reset": true,
00:06:01.749 "nvme_admin": false,
00:06:01.749 "nvme_io": false,
00:06:01.749 "nvme_io_md": false,
00:06:01.749 "write_zeroes": true,
00:06:01.749 "zcopy": true,
00:06:01.749 "get_zone_info": false,
00:06:01.749 "zone_management": false,
00:06:01.749 "zone_append": false,
00:06:01.749 "compare": false,
00:06:01.749 "compare_and_write": false,
00:06:01.749 "abort": true,
00:06:01.749 "seek_hole": false,
00:06:01.749 "seek_data": false,
00:06:01.749 "copy": true,
00:06:01.749 "nvme_iov_md": false
00:06:01.749 },
00:06:01.749 "memory_domains": [
00:06:01.749 {
00:06:01.749 "dma_device_id": "system",
00:06:01.749 "dma_device_type": 1
00:06:01.749 },
00:06:01.749 {
00:06:01.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.749 "dma_device_type": 2
00:06:01.749 }
00:06:01.749 ],
00:06:01.749 "driver_specific": {}
00:06:01.749 }
00:06:01.749 ]'
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 [2024-11-05 11:22:00.859894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:01.749 [2024-11-05 11:22:00.859976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:01.749 [2024-11-05 11:22:00.860004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:06:01.749 [2024-11-05 11:22:00.860022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:01.749 [2024-11-05 11:22:00.862686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:01.749 [2024-11-05 11:22:00.862730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
Passthru0
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:01.749 {
00:06:01.749 "name": "Malloc0",
00:06:01.749 "aliases": [
00:06:01.749 "43e42aae-0280-41af-bf52-1fd3ad1c2bb1"
00:06:01.749 ],
00:06:01.749 "product_name": "Malloc disk",
00:06:01.749 "block_size": 512,
00:06:01.749 "num_blocks": 16384,
00:06:01.749 "uuid": "43e42aae-0280-41af-bf52-1fd3ad1c2bb1",
00:06:01.749 "assigned_rate_limits": {
00:06:01.749 "rw_ios_per_sec": 0,
00:06:01.749 "rw_mbytes_per_sec": 0,
00:06:01.749 "r_mbytes_per_sec": 0,
00:06:01.749 "w_mbytes_per_sec": 0
00:06:01.749 },
00:06:01.749 "claimed": true,
00:06:01.749 "claim_type": "exclusive_write",
00:06:01.749 "zoned": false,
00:06:01.749 "supported_io_types": {
00:06:01.749 "read": true,
00:06:01.749 "write": true,
00:06:01.749 "unmap": true,
00:06:01.749 "flush": true,
00:06:01.749 "reset": true,
00:06:01.749 "nvme_admin": false,
00:06:01.749 "nvme_io": false,
00:06:01.749 "nvme_io_md": false,
00:06:01.749 "write_zeroes": true,
00:06:01.749 "zcopy": true,
00:06:01.749 "get_zone_info": false,
00:06:01.749 "zone_management": false,
00:06:01.749 "zone_append": false,
00:06:01.749 "compare": false,
00:06:01.749 "compare_and_write": false,
00:06:01.749 "abort": true,
00:06:01.749 "seek_hole": false,
00:06:01.749 "seek_data": false,
00:06:01.749 "copy": true,
00:06:01.749 "nvme_iov_md": false
00:06:01.749 },
00:06:01.749 "memory_domains": [
00:06:01.749 {
00:06:01.749 "dma_device_id": "system",
00:06:01.749 "dma_device_type": 1
00:06:01.749 },
00:06:01.749 {
00:06:01.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.749 "dma_device_type": 2
00:06:01.749 }
00:06:01.749 ],
00:06:01.749 "driver_specific": {}
00:06:01.749 },
00:06:01.749 {
00:06:01.749 "name": "Passthru0",
00:06:01.749 "aliases": [
00:06:01.749 "d5581ab5-df2e-5530-bc81-1216f2a7ab40"
00:06:01.749 ],
00:06:01.749 "product_name": "passthru",
00:06:01.749 "block_size": 512,
00:06:01.749 "num_blocks": 16384,
00:06:01.749 "uuid": "d5581ab5-df2e-5530-bc81-1216f2a7ab40",
00:06:01.749 "assigned_rate_limits": {
00:06:01.749 "rw_ios_per_sec": 0,
00:06:01.749 "rw_mbytes_per_sec": 0,
00:06:01.749 "r_mbytes_per_sec": 0,
00:06:01.749 "w_mbytes_per_sec": 0
00:06:01.749 },
00:06:01.749 "claimed": false,
00:06:01.749 "zoned": false,
00:06:01.749 "supported_io_types": {
00:06:01.749 "read": true,
00:06:01.749 "write": true,
00:06:01.749 "unmap": true,
00:06:01.749 "flush": true,
00:06:01.749 "reset": true,
00:06:01.749 "nvme_admin": false,
00:06:01.749 "nvme_io": false,
00:06:01.749 "nvme_io_md": false,
00:06:01.749 "write_zeroes": true,
00:06:01.749 "zcopy": true,
00:06:01.749 "get_zone_info": false,
00:06:01.749 "zone_management": false,
00:06:01.749 "zone_append": false,
00:06:01.749 "compare": false,
00:06:01.749 "compare_and_write": false,
00:06:01.749 "abort": true,
00:06:01.749 "seek_hole": false,
00:06:01.749 "seek_data": false,
00:06:01.749 "copy": true,
00:06:01.749 "nvme_iov_md": false
00:06:01.749 },
00:06:01.749 "memory_domains": [
00:06:01.749 {
00:06:01.749 "dma_device_id": "system",
00:06:01.749 "dma_device_type": 1
00:06:01.749 },
00:06:01.749 {
00:06:01.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.749 "dma_device_type": 2
00:06:01.749 }
00:06:01.749 ],
00:06:01.749 "driver_specific": {
00:06:01.749 "passthru": {
00:06:01.749 "name": "Passthru0",
00:06:01.749 "base_bdev_name": "Malloc0"
00:06:01.749 }
00:06:01.749 }
00:06:01.749 }
00:06:01.749 ]'
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.749 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:01.749 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.749 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:01.749 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:02.008 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:02.008
00:06:02.008 real 0m0.362s
00:06:02.008 user 0m0.192s
00:06:02.008 sys 0m0.055s
00:06:02.008 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 ************************************
00:06:02.008 END TEST rpc_integrity
************************************
00:06:02.008 11:22:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:02.008 11:22:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:02.008 11:22:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:02.008 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 ************************************
00:06:02.008 START TEST rpc_plugins
************************************
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:02.008 {
00:06:02.008 "name": "Malloc1",
00:06:02.008 "aliases": [
00:06:02.008 "3007159a-eaf5-4377-92aa-594e1b0829a1"
00:06:02.008 ],
00:06:02.008 "product_name": "Malloc disk",
00:06:02.008 "block_size": 4096,
00:06:02.008 "num_blocks": 256,
00:06:02.008 "uuid": "3007159a-eaf5-4377-92aa-594e1b0829a1",
00:06:02.008 "assigned_rate_limits": {
00:06:02.008 "rw_ios_per_sec": 0,
00:06:02.008 "rw_mbytes_per_sec": 0,
00:06:02.008 "r_mbytes_per_sec": 0,
00:06:02.008 "w_mbytes_per_sec": 0
00:06:02.008 },
00:06:02.008 "claimed": false,
00:06:02.008 "zoned": false,
00:06:02.008 "supported_io_types": {
00:06:02.008 "read": true,
00:06:02.008 "write": true,
00:06:02.008 "unmap": true,
00:06:02.008 "flush": true,
00:06:02.008 "reset": true,
00:06:02.008 "nvme_admin": false,
00:06:02.008 "nvme_io": false,
00:06:02.008 "nvme_io_md": false,
00:06:02.008 "write_zeroes": true,
00:06:02.008 "zcopy": true,
00:06:02.008 "get_zone_info": false,
00:06:02.008 "zone_management": false,
00:06:02.008 "zone_append": false,
00:06:02.008 "compare": false,
00:06:02.008 "compare_and_write": false,
00:06:02.008 "abort": true,
00:06:02.008 "seek_hole": false,
00:06:02.008 "seek_data": false,
00:06:02.008 "copy": true,
00:06:02.008 "nvme_iov_md": false
00:06:02.008 },
00:06:02.008 "memory_domains": [
00:06:02.008 {
00:06:02.008 "dma_device_id": "system",
00:06:02.008 "dma_device_type": 1
00:06:02.008 },
00:06:02.008 {
00:06:02.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.008 "dma_device_type": 2
00:06:02.008 }
00:06:02.008 ],
00:06:02.008 "driver_specific": {}
00:06:02.008 }
00:06:02.008 ]'
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:02.008 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:02.008
00:06:02.008 real 0m0.159s
00:06:02.008 user 0m0.085s
00:06:02.008 sys 0m0.028s
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:02.008 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.008 ************************************
00:06:02.008 END TEST rpc_plugins
************************************
00:06:02.267 11:22:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:02.267 11:22:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:02.267 11:22:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:02.267 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.267 ************************************
00:06:02.267 START TEST rpc_trace_cmd_test
************************************
00:06:02.267 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:02.268 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57034",
00:06:02.268 "tpoint_group_mask": "0x8",
00:06:02.268 "iscsi_conn": {
00:06:02.268 "mask": "0x2",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "scsi": {
00:06:02.268 "mask": "0x4",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "bdev": {
00:06:02.268 "mask": "0x8",
00:06:02.268 "tpoint_mask": "0xffffffffffffffff"
00:06:02.268 },
00:06:02.268 "nvmf_rdma": {
00:06:02.268 "mask": "0x10",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "nvmf_tcp": {
00:06:02.268 "mask": "0x20",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "ftl": {
00:06:02.268 "mask": "0x40",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "blobfs": {
00:06:02.268 "mask": "0x80",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "dsa": {
00:06:02.268 "mask": "0x200",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "thread": {
00:06:02.268 "mask": "0x400",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "nvme_pcie": {
00:06:02.268 "mask": "0x800",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "iaa": {
00:06:02.268 "mask": "0x1000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "nvme_tcp": {
00:06:02.268 "mask": "0x2000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "bdev_nvme": {
00:06:02.268 "mask": "0x4000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "sock": {
00:06:02.268 "mask": "0x8000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "blob": {
00:06:02.268 "mask": "0x10000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "bdev_raid": {
00:06:02.268 "mask": "0x20000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 },
00:06:02.268 "scheduler": {
00:06:02.268 "mask": "0x40000",
00:06:02.268 "tpoint_mask": "0x0"
00:06:02.268 }
00:06:02.268 }'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:02.268 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:02.527 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:02.527
00:06:02.527 real 0m0.242s
00:06:02.527 user 0m0.190s
00:06:02.527 sys 0m0.041s
00:06:02.527 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:02.527 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 ************************************ 00:06:02.527 END TEST rpc_trace_cmd_test 00:06:02.527 ************************************ 00:06:02.527 11:22:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.527 11:22:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.527 11:22:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.527 11:22:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.527 11:22:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.527 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 ************************************ 00:06:02.527 START TEST rpc_daemon_integrity 00:06:02.527 ************************************ 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.527 { 00:06:02.527 "name": "Malloc2", 00:06:02.527 "aliases": [ 00:06:02.527 "47e32a50-1c88-4b77-af9a-d6a46aaacb13" 00:06:02.527 ], 00:06:02.527 "product_name": "Malloc disk", 00:06:02.527 "block_size": 512, 00:06:02.527 "num_blocks": 16384, 00:06:02.527 "uuid": "47e32a50-1c88-4b77-af9a-d6a46aaacb13", 00:06:02.527 "assigned_rate_limits": { 00:06:02.527 "rw_ios_per_sec": 0, 00:06:02.527 "rw_mbytes_per_sec": 0, 00:06:02.527 "r_mbytes_per_sec": 0, 00:06:02.527 "w_mbytes_per_sec": 0 00:06:02.527 }, 00:06:02.527 "claimed": false, 00:06:02.527 "zoned": false, 00:06:02.527 "supported_io_types": { 00:06:02.527 "read": true, 00:06:02.527 "write": true, 00:06:02.527 "unmap": true, 00:06:02.527 "flush": true, 00:06:02.527 "reset": true, 00:06:02.527 "nvme_admin": false, 00:06:02.527 "nvme_io": false, 00:06:02.527 "nvme_io_md": false, 00:06:02.527 "write_zeroes": true, 00:06:02.527 "zcopy": true, 00:06:02.527 "get_zone_info": false, 00:06:02.527 "zone_management": false, 00:06:02.527 "zone_append": false, 00:06:02.527 "compare": false, 00:06:02.527 "compare_and_write": false, 00:06:02.527 "abort": true, 00:06:02.527 "seek_hole": false, 00:06:02.527 "seek_data": false, 00:06:02.527 "copy": true, 00:06:02.527 "nvme_iov_md": false 00:06:02.527 }, 00:06:02.527 "memory_domains": [ 00:06:02.527 { 00:06:02.527 "dma_device_id": "system", 00:06:02.527 "dma_device_type": 1 00:06:02.527 }, 00:06:02.527 { 00:06:02.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.527 "dma_device_type": 2 00:06:02.527 } 
00:06:02.527 ], 00:06:02.527 "driver_specific": {} 00:06:02.527 } 00:06:02.527 ]' 00:06:02.527 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.786 [2024-11-05 11:22:01.824360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:02.786 [2024-11-05 11:22:01.824463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.786 [2024-11-05 11:22:01.824494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:02.786 [2024-11-05 11:22:01.824508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.786 [2024-11-05 11:22:01.827224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.786 [2024-11-05 11:22:01.827272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.786 Passthru0 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.786 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.786 { 00:06:02.786 "name": "Malloc2", 00:06:02.786 "aliases": [ 00:06:02.786 "47e32a50-1c88-4b77-af9a-d6a46aaacb13" 
00:06:02.786 ], 00:06:02.786 "product_name": "Malloc disk", 00:06:02.786 "block_size": 512, 00:06:02.786 "num_blocks": 16384, 00:06:02.786 "uuid": "47e32a50-1c88-4b77-af9a-d6a46aaacb13", 00:06:02.786 "assigned_rate_limits": { 00:06:02.786 "rw_ios_per_sec": 0, 00:06:02.786 "rw_mbytes_per_sec": 0, 00:06:02.786 "r_mbytes_per_sec": 0, 00:06:02.786 "w_mbytes_per_sec": 0 00:06:02.786 }, 00:06:02.786 "claimed": true, 00:06:02.786 "claim_type": "exclusive_write", 00:06:02.786 "zoned": false, 00:06:02.786 "supported_io_types": { 00:06:02.786 "read": true, 00:06:02.786 "write": true, 00:06:02.786 "unmap": true, 00:06:02.786 "flush": true, 00:06:02.786 "reset": true, 00:06:02.786 "nvme_admin": false, 00:06:02.786 "nvme_io": false, 00:06:02.786 "nvme_io_md": false, 00:06:02.787 "write_zeroes": true, 00:06:02.787 "zcopy": true, 00:06:02.787 "get_zone_info": false, 00:06:02.787 "zone_management": false, 00:06:02.787 "zone_append": false, 00:06:02.787 "compare": false, 00:06:02.787 "compare_and_write": false, 00:06:02.787 "abort": true, 00:06:02.787 "seek_hole": false, 00:06:02.787 "seek_data": false, 00:06:02.787 "copy": true, 00:06:02.787 "nvme_iov_md": false 00:06:02.787 }, 00:06:02.787 "memory_domains": [ 00:06:02.787 { 00:06:02.787 "dma_device_id": "system", 00:06:02.787 "dma_device_type": 1 00:06:02.787 }, 00:06:02.787 { 00:06:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.787 "dma_device_type": 2 00:06:02.787 } 00:06:02.787 ], 00:06:02.787 "driver_specific": {} 00:06:02.787 }, 00:06:02.787 { 00:06:02.787 "name": "Passthru0", 00:06:02.787 "aliases": [ 00:06:02.787 "a12e0ce5-de5e-5049-a438-9a32b9e8d78f" 00:06:02.787 ], 00:06:02.787 "product_name": "passthru", 00:06:02.787 "block_size": 512, 00:06:02.787 "num_blocks": 16384, 00:06:02.787 "uuid": "a12e0ce5-de5e-5049-a438-9a32b9e8d78f", 00:06:02.787 "assigned_rate_limits": { 00:06:02.787 "rw_ios_per_sec": 0, 00:06:02.787 "rw_mbytes_per_sec": 0, 00:06:02.787 "r_mbytes_per_sec": 0, 00:06:02.787 "w_mbytes_per_sec": 0 
00:06:02.787 }, 00:06:02.787 "claimed": false, 00:06:02.787 "zoned": false, 00:06:02.787 "supported_io_types": { 00:06:02.787 "read": true, 00:06:02.787 "write": true, 00:06:02.787 "unmap": true, 00:06:02.787 "flush": true, 00:06:02.787 "reset": true, 00:06:02.787 "nvme_admin": false, 00:06:02.787 "nvme_io": false, 00:06:02.787 "nvme_io_md": false, 00:06:02.787 "write_zeroes": true, 00:06:02.787 "zcopy": true, 00:06:02.787 "get_zone_info": false, 00:06:02.787 "zone_management": false, 00:06:02.787 "zone_append": false, 00:06:02.787 "compare": false, 00:06:02.787 "compare_and_write": false, 00:06:02.787 "abort": true, 00:06:02.787 "seek_hole": false, 00:06:02.787 "seek_data": false, 00:06:02.787 "copy": true, 00:06:02.787 "nvme_iov_md": false 00:06:02.787 }, 00:06:02.787 "memory_domains": [ 00:06:02.787 { 00:06:02.787 "dma_device_id": "system", 00:06:02.787 "dma_device_type": 1 00:06:02.787 }, 00:06:02.787 { 00:06:02.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.787 "dma_device_type": 2 00:06:02.787 } 00:06:02.787 ], 00:06:02.787 "driver_specific": { 00:06:02.787 "passthru": { 00:06:02.787 "name": "Passthru0", 00:06:02.787 "base_bdev_name": "Malloc2" 00:06:02.787 } 00:06:02.787 } 00:06:02.787 } 00:06:02.787 ]' 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.787 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.787 ************************************ 00:06:02.787 END TEST rpc_daemon_integrity 00:06:02.787 ************************************ 00:06:02.787 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.787 00:06:02.787 real 0m0.369s 00:06:02.787 user 0m0.195s 00:06:02.787 sys 0m0.057s 00:06:02.787 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.787 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.045 11:22:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:03.045 11:22:02 rpc -- rpc/rpc.sh@84 -- # killprocess 57034 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@952 -- # '[' -z 57034 ']' 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@956 -- # kill -0 57034 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@957 -- # uname 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57034 00:06:03.045 killing process with pid 57034 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57034' 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@971 -- # kill 57034 00:06:03.045 11:22:02 rpc -- common/autotest_common.sh@976 -- # wait 57034 00:06:05.595 00:06:05.595 real 0m5.886s 00:06:05.595 user 0m6.257s 00:06:05.595 sys 0m1.118s 00:06:05.595 ************************************ 00:06:05.595 END TEST rpc 00:06:05.595 ************************************ 00:06:05.595 11:22:04 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.595 11:22:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 11:22:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.854 11:22:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.854 11:22:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.854 11:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:05.854 ************************************ 00:06:05.854 START TEST skip_rpc 00:06:05.854 ************************************ 00:06:05.854 11:22:04 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:05.854 * Looking for test storage... 
00:06:05.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.854 11:22:05 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.854 11:22:05 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.854 11:22:05 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.854 11:22:05 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.854 11:22:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.855 11:22:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.855 11:22:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.855 11:22:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.855 11:22:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.855 11:22:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.855 --rc genhtml_branch_coverage=1 00:06:05.855 --rc genhtml_function_coverage=1 00:06:05.855 --rc genhtml_legend=1 00:06:05.855 --rc geninfo_all_blocks=1 00:06:05.855 --rc geninfo_unexecuted_blocks=1 00:06:05.855 00:06:05.855 ' 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.855 --rc genhtml_branch_coverage=1 00:06:05.855 --rc genhtml_function_coverage=1 00:06:05.855 --rc genhtml_legend=1 00:06:05.855 --rc geninfo_all_blocks=1 00:06:05.855 --rc geninfo_unexecuted_blocks=1 00:06:05.855 00:06:05.855 ' 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.855 --rc genhtml_branch_coverage=1 00:06:05.855 --rc genhtml_function_coverage=1 00:06:05.855 --rc genhtml_legend=1 00:06:05.855 --rc geninfo_all_blocks=1 00:06:05.855 --rc geninfo_unexecuted_blocks=1 00:06:05.855 00:06:05.855 ' 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.855 --rc genhtml_branch_coverage=1 00:06:05.855 --rc genhtml_function_coverage=1 00:06:05.855 --rc genhtml_legend=1 00:06:05.855 --rc geninfo_all_blocks=1 00:06:05.855 --rc geninfo_unexecuted_blocks=1 00:06:05.855 00:06:05.855 ' 00:06:05.855 11:22:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.855 11:22:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.855 11:22:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.855 11:22:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.114 ************************************ 00:06:06.114 START TEST skip_rpc 00:06:06.114 ************************************ 00:06:06.114 11:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:06.114 11:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57269 00:06:06.114 11:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.114 11:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.114 11:22:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.114 [2024-11-05 11:22:05.254526] Starting SPDK v25.01-pre 
git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:06.114 [2024-11-05 11:22:05.254772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57269 ] 00:06:06.373 [2024-11-05 11:22:05.435700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.373 [2024-11-05 11:22:05.563319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57269 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57269 ']' 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57269 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57269 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57269' 00:06:11.648 killing process with pid 57269 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57269 00:06:11.648 11:22:10 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57269 00:06:13.562 00:06:13.562 real 0m7.532s 00:06:13.562 user 0m7.078s 00:06:13.562 sys 0m0.380s 00:06:13.562 11:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.562 11:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.562 ************************************ 00:06:13.562 END TEST skip_rpc 00:06:13.562 ************************************ 00:06:13.562 11:22:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.562 11:22:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:13.562 11:22:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.562 11:22:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.562 
************************************ 00:06:13.562 START TEST skip_rpc_with_json 00:06:13.562 ************************************ 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57383 00:06:13.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57383 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57383 ']' 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.562 11:22:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.562 [2024-11-05 11:22:12.827452] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:13.562 [2024-11-05 11:22:12.827571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57383 ] 00:06:13.825 [2024-11-05 11:22:13.001663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.083 [2024-11-05 11:22:13.125563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.029 11:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.029 11:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:15.029 11:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:15.029 11:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.029 11:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 [2024-11-05 11:22:13.996608] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:15.029 request: 00:06:15.029 { 00:06:15.029 "trtype": "tcp", 00:06:15.029 "method": "nvmf_get_transports", 00:06:15.029 "req_id": 1 00:06:15.029 } 00:06:15.029 Got JSON-RPC error response 00:06:15.029 response: 00:06:15.029 { 00:06:15.029 "code": -19, 00:06:15.029 "message": "No such device" 00:06:15.029 } 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 [2024-11-05 11:22:14.008713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.029 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.029 { 00:06:15.029 "subsystems": [ 00:06:15.029 { 00:06:15.029 "subsystem": "fsdev", 00:06:15.029 "config": [ 00:06:15.029 { 00:06:15.029 "method": "fsdev_set_opts", 00:06:15.029 "params": { 00:06:15.029 "fsdev_io_pool_size": 65535, 00:06:15.029 "fsdev_io_cache_size": 256 00:06:15.029 } 00:06:15.029 } 00:06:15.029 ] 00:06:15.029 }, 00:06:15.029 { 00:06:15.029 "subsystem": "keyring", 00:06:15.029 "config": [] 00:06:15.029 }, 00:06:15.029 { 00:06:15.029 "subsystem": "iobuf", 00:06:15.029 "config": [ 00:06:15.029 { 00:06:15.029 "method": "iobuf_set_options", 00:06:15.029 "params": { 00:06:15.029 "small_pool_count": 8192, 00:06:15.029 "large_pool_count": 1024, 00:06:15.029 "small_bufsize": 8192, 00:06:15.029 "large_bufsize": 135168, 00:06:15.029 "enable_numa": false 00:06:15.029 } 00:06:15.029 } 00:06:15.029 ] 00:06:15.029 }, 00:06:15.029 { 00:06:15.029 "subsystem": "sock", 00:06:15.029 "config": [ 00:06:15.029 { 00:06:15.029 "method": "sock_set_default_impl", 00:06:15.029 "params": { 00:06:15.029 "impl_name": "posix" 00:06:15.029 } 00:06:15.029 }, 00:06:15.029 { 00:06:15.029 "method": "sock_impl_set_options", 00:06:15.029 "params": { 00:06:15.029 "impl_name": "ssl", 00:06:15.029 "recv_buf_size": 4096, 00:06:15.029 "send_buf_size": 4096, 00:06:15.029 "enable_recv_pipe": true, 00:06:15.029 "enable_quickack": false, 00:06:15.029 
"enable_placement_id": 0, 00:06:15.029 "enable_zerocopy_send_server": true, 00:06:15.029 "enable_zerocopy_send_client": false, 00:06:15.029 "zerocopy_threshold": 0, 00:06:15.029 "tls_version": 0, 00:06:15.029 "enable_ktls": false 00:06:15.029 } 00:06:15.029 }, 00:06:15.029 { 00:06:15.029 "method": "sock_impl_set_options", 00:06:15.029 "params": { 00:06:15.029 "impl_name": "posix", 00:06:15.029 "recv_buf_size": 2097152, 00:06:15.029 "send_buf_size": 2097152, 00:06:15.029 "enable_recv_pipe": true, 00:06:15.030 "enable_quickack": false, 00:06:15.030 "enable_placement_id": 0, 00:06:15.030 "enable_zerocopy_send_server": true, 00:06:15.030 "enable_zerocopy_send_client": false, 00:06:15.030 "zerocopy_threshold": 0, 00:06:15.030 "tls_version": 0, 00:06:15.030 "enable_ktls": false 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "vmd", 00:06:15.030 "config": [] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "accel", 00:06:15.030 "config": [ 00:06:15.030 { 00:06:15.030 "method": "accel_set_options", 00:06:15.030 "params": { 00:06:15.030 "small_cache_size": 128, 00:06:15.030 "large_cache_size": 16, 00:06:15.030 "task_count": 2048, 00:06:15.030 "sequence_count": 2048, 00:06:15.030 "buf_count": 2048 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "bdev", 00:06:15.030 "config": [ 00:06:15.030 { 00:06:15.030 "method": "bdev_set_options", 00:06:15.030 "params": { 00:06:15.030 "bdev_io_pool_size": 65535, 00:06:15.030 "bdev_io_cache_size": 256, 00:06:15.030 "bdev_auto_examine": true, 00:06:15.030 "iobuf_small_cache_size": 128, 00:06:15.030 "iobuf_large_cache_size": 16 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "bdev_raid_set_options", 00:06:15.030 "params": { 00:06:15.030 "process_window_size_kb": 1024, 00:06:15.030 "process_max_bandwidth_mb_sec": 0 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "bdev_iscsi_set_options", 
00:06:15.030 "params": { 00:06:15.030 "timeout_sec": 30 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "bdev_nvme_set_options", 00:06:15.030 "params": { 00:06:15.030 "action_on_timeout": "none", 00:06:15.030 "timeout_us": 0, 00:06:15.030 "timeout_admin_us": 0, 00:06:15.030 "keep_alive_timeout_ms": 10000, 00:06:15.030 "arbitration_burst": 0, 00:06:15.030 "low_priority_weight": 0, 00:06:15.030 "medium_priority_weight": 0, 00:06:15.030 "high_priority_weight": 0, 00:06:15.030 "nvme_adminq_poll_period_us": 10000, 00:06:15.030 "nvme_ioq_poll_period_us": 0, 00:06:15.030 "io_queue_requests": 0, 00:06:15.030 "delay_cmd_submit": true, 00:06:15.030 "transport_retry_count": 4, 00:06:15.030 "bdev_retry_count": 3, 00:06:15.030 "transport_ack_timeout": 0, 00:06:15.030 "ctrlr_loss_timeout_sec": 0, 00:06:15.030 "reconnect_delay_sec": 0, 00:06:15.030 "fast_io_fail_timeout_sec": 0, 00:06:15.030 "disable_auto_failback": false, 00:06:15.030 "generate_uuids": false, 00:06:15.030 "transport_tos": 0, 00:06:15.030 "nvme_error_stat": false, 00:06:15.030 "rdma_srq_size": 0, 00:06:15.030 "io_path_stat": false, 00:06:15.030 "allow_accel_sequence": false, 00:06:15.030 "rdma_max_cq_size": 0, 00:06:15.030 "rdma_cm_event_timeout_ms": 0, 00:06:15.030 "dhchap_digests": [ 00:06:15.030 "sha256", 00:06:15.030 "sha384", 00:06:15.030 "sha512" 00:06:15.030 ], 00:06:15.030 "dhchap_dhgroups": [ 00:06:15.030 "null", 00:06:15.030 "ffdhe2048", 00:06:15.030 "ffdhe3072", 00:06:15.030 "ffdhe4096", 00:06:15.030 "ffdhe6144", 00:06:15.030 "ffdhe8192" 00:06:15.030 ] 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "bdev_nvme_set_hotplug", 00:06:15.030 "params": { 00:06:15.030 "period_us": 100000, 00:06:15.030 "enable": false 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "bdev_wait_for_examine" 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "scsi", 00:06:15.030 "config": null 00:06:15.030 }, 00:06:15.030 { 
00:06:15.030 "subsystem": "scheduler", 00:06:15.030 "config": [ 00:06:15.030 { 00:06:15.030 "method": "framework_set_scheduler", 00:06:15.030 "params": { 00:06:15.030 "name": "static" 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "vhost_scsi", 00:06:15.030 "config": [] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "vhost_blk", 00:06:15.030 "config": [] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "ublk", 00:06:15.030 "config": [] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "nbd", 00:06:15.030 "config": [] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "nvmf", 00:06:15.030 "config": [ 00:06:15.030 { 00:06:15.030 "method": "nvmf_set_config", 00:06:15.030 "params": { 00:06:15.030 "discovery_filter": "match_any", 00:06:15.030 "admin_cmd_passthru": { 00:06:15.030 "identify_ctrlr": false 00:06:15.030 }, 00:06:15.030 "dhchap_digests": [ 00:06:15.030 "sha256", 00:06:15.030 "sha384", 00:06:15.030 "sha512" 00:06:15.030 ], 00:06:15.030 "dhchap_dhgroups": [ 00:06:15.030 "null", 00:06:15.030 "ffdhe2048", 00:06:15.030 "ffdhe3072", 00:06:15.030 "ffdhe4096", 00:06:15.030 "ffdhe6144", 00:06:15.030 "ffdhe8192" 00:06:15.030 ] 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "nvmf_set_max_subsystems", 00:06:15.030 "params": { 00:06:15.030 "max_subsystems": 1024 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "nvmf_set_crdt", 00:06:15.030 "params": { 00:06:15.030 "crdt1": 0, 00:06:15.030 "crdt2": 0, 00:06:15.030 "crdt3": 0 00:06:15.030 } 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "method": "nvmf_create_transport", 00:06:15.030 "params": { 00:06:15.030 "trtype": "TCP", 00:06:15.030 "max_queue_depth": 128, 00:06:15.030 "max_io_qpairs_per_ctrlr": 127, 00:06:15.030 "in_capsule_data_size": 4096, 00:06:15.030 "max_io_size": 131072, 00:06:15.030 "io_unit_size": 131072, 00:06:15.030 "max_aq_depth": 128, 00:06:15.030 "num_shared_buffers": 511, 
00:06:15.030 "buf_cache_size": 4294967295, 00:06:15.030 "dif_insert_or_strip": false, 00:06:15.030 "zcopy": false, 00:06:15.030 "c2h_success": true, 00:06:15.030 "sock_priority": 0, 00:06:15.030 "abort_timeout_sec": 1, 00:06:15.030 "ack_timeout": 0, 00:06:15.030 "data_wr_pool_size": 0 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 }, 00:06:15.030 { 00:06:15.030 "subsystem": "iscsi", 00:06:15.030 "config": [ 00:06:15.030 { 00:06:15.030 "method": "iscsi_set_options", 00:06:15.030 "params": { 00:06:15.030 "node_base": "iqn.2016-06.io.spdk", 00:06:15.030 "max_sessions": 128, 00:06:15.030 "max_connections_per_session": 2, 00:06:15.030 "max_queue_depth": 64, 00:06:15.030 "default_time2wait": 2, 00:06:15.030 "default_time2retain": 20, 00:06:15.030 "first_burst_length": 8192, 00:06:15.030 "immediate_data": true, 00:06:15.030 "allow_duplicated_isid": false, 00:06:15.030 "error_recovery_level": 0, 00:06:15.030 "nop_timeout": 60, 00:06:15.030 "nop_in_interval": 30, 00:06:15.030 "disable_chap": false, 00:06:15.030 "require_chap": false, 00:06:15.030 "mutual_chap": false, 00:06:15.030 "chap_group": 0, 00:06:15.030 "max_large_datain_per_connection": 64, 00:06:15.030 "max_r2t_per_connection": 4, 00:06:15.030 "pdu_pool_size": 36864, 00:06:15.030 "immediate_data_pool_size": 16384, 00:06:15.030 "data_out_pool_size": 2048 00:06:15.030 } 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 } 00:06:15.030 ] 00:06:15.030 } 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57383 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57383 ']' 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57383 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57383 00:06:15.030 killing process with pid 57383 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57383' 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57383 00:06:15.030 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57383 00:06:17.565 11:22:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57429 00:06:17.565 11:22:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.565 11:22:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57429 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57429 ']' 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57429 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57429 00:06:22.848 killing process with pid 57429 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57429' 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57429 00:06:22.848 11:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57429 00:06:24.750 11:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.750 11:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.010 00:06:25.010 real 0m11.300s 00:06:25.010 user 0m10.785s 00:06:25.010 sys 0m0.835s 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 END TEST skip_rpc_with_json 00:06:25.010 ************************************ 00:06:25.010 11:22:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:25.010 11:22:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.010 11:22:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.010 11:22:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 START TEST skip_rpc_with_delay 00:06:25.010 ************************************ 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:25.010 
11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.010 [2024-11-05 11:22:24.202368] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.010 ************************************ 00:06:25.010 END TEST skip_rpc_with_delay 00:06:25.010 ************************************ 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.010 00:06:25.010 real 0m0.169s 00:06:25.010 user 0m0.095s 00:06:25.010 sys 0m0.072s 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.010 11:22:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 11:22:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.270 11:22:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.270 11:22:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.270 11:22:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.270 11:22:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.270 11:22:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 START TEST exit_on_failed_rpc_init 00:06:25.270 ************************************ 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57568 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57568 00:06:25.270 11:22:24 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57568 ']' 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.270 11:22:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 [2024-11-05 11:22:24.431183] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:25.270 [2024-11-05 11:22:24.431401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57568 ] 00:06:25.529 [2024-11-05 11:22:24.605530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.529 [2024-11-05 11:22:24.721177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.466 11:22:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.466 11:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.466 [2024-11-05 11:22:25.703801] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:26.466 [2024-11-05 11:22:25.703916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57586 ] 00:06:26.725 [2024-11-05 11:22:25.879408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.725 [2024-11-05 11:22:25.996349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.725 [2024-11-05 11:22:25.996441] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:26.725 [2024-11-05 11:22:25.996454] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:26.725 [2024-11-05 11:22:25.996465] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57568 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57568 ']' 00:06:26.985 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57568 00:06:26.985 11:22:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57568 00:06:27.244 killing process with pid 57568 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57568' 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57568 00:06:27.244 11:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57568 00:06:29.780 00:06:29.780 real 0m4.336s 00:06:29.780 user 0m4.672s 00:06:29.780 sys 0m0.559s 00:06:29.780 ************************************ 00:06:29.780 END TEST exit_on_failed_rpc_init 00:06:29.780 ************************************ 00:06:29.780 11:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.780 11:22:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 11:22:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.780 ************************************ 00:06:29.780 END TEST skip_rpc 00:06:29.780 ************************************ 00:06:29.780 00:06:29.780 real 0m23.827s 00:06:29.780 user 0m22.840s 00:06:29.780 sys 0m2.137s 00:06:29.780 11:22:28 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.780 11:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 11:22:28 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.780 11:22:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.780 11:22:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.780 11:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.780 ************************************ 00:06:29.780 START TEST rpc_client 00:06:29.780 ************************************ 00:06:29.780 11:22:28 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.780 * Looking for test storage... 00:06:29.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:29.780 11:22:28 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.781 11:22:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.781 --rc genhtml_branch_coverage=1 00:06:29.781 --rc genhtml_function_coverage=1 00:06:29.781 --rc genhtml_legend=1 00:06:29.781 --rc geninfo_all_blocks=1 00:06:29.781 --rc geninfo_unexecuted_blocks=1 00:06:29.781 00:06:29.781 ' 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.781 --rc genhtml_branch_coverage=1 00:06:29.781 --rc genhtml_function_coverage=1 00:06:29.781 --rc 
genhtml_legend=1 00:06:29.781 --rc geninfo_all_blocks=1 00:06:29.781 --rc geninfo_unexecuted_blocks=1 00:06:29.781 00:06:29.781 ' 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.781 --rc genhtml_branch_coverage=1 00:06:29.781 --rc genhtml_function_coverage=1 00:06:29.781 --rc genhtml_legend=1 00:06:29.781 --rc geninfo_all_blocks=1 00:06:29.781 --rc geninfo_unexecuted_blocks=1 00:06:29.781 00:06:29.781 ' 00:06:29.781 11:22:28 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.781 --rc genhtml_branch_coverage=1 00:06:29.781 --rc genhtml_function_coverage=1 00:06:29.781 --rc genhtml_legend=1 00:06:29.781 --rc geninfo_all_blocks=1 00:06:29.781 --rc geninfo_unexecuted_blocks=1 00:06:29.781 00:06:29.781 ' 00:06:29.781 11:22:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:29.781 OK 00:06:30.040 11:22:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.040 00:06:30.040 real 0m0.291s 00:06:30.040 user 0m0.142s 00:06:30.040 sys 0m0.164s 00:06:30.040 ************************************ 00:06:30.040 END TEST rpc_client 00:06:30.040 ************************************ 00:06:30.040 11:22:29 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.040 11:22:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.040 11:22:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:30.040 11:22:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.041 11:22:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.041 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.041 ************************************ 00:06:30.041 START TEST json_config 
00:06:30.041 ************************************ 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.041 11:22:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.041 11:22:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.041 11:22:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.041 11:22:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.041 11:22:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.041 11:22:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:30.041 11:22:29 json_config -- scripts/common.sh@345 -- # : 1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.041 11:22:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.041 11:22:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@353 -- # local d=1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.041 11:22:29 json_config -- scripts/common.sh@355 -- # echo 1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.041 11:22:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@353 -- # local d=2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.041 11:22:29 json_config -- scripts/common.sh@355 -- # echo 2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.041 11:22:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.041 11:22:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.041 11:22:29 json_config -- scripts/common.sh@368 -- # return 0 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.041 --rc genhtml_branch_coverage=1 00:06:30.041 --rc genhtml_function_coverage=1 00:06:30.041 --rc genhtml_legend=1 00:06:30.041 --rc geninfo_all_blocks=1 00:06:30.041 --rc geninfo_unexecuted_blocks=1 00:06:30.041 00:06:30.041 ' 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.041 --rc genhtml_branch_coverage=1 00:06:30.041 --rc genhtml_function_coverage=1 00:06:30.041 --rc genhtml_legend=1 00:06:30.041 --rc geninfo_all_blocks=1 00:06:30.041 --rc geninfo_unexecuted_blocks=1 00:06:30.041 00:06:30.041 ' 00:06:30.041 11:22:29 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.041 --rc genhtml_branch_coverage=1 00:06:30.041 --rc genhtml_function_coverage=1 00:06:30.041 --rc genhtml_legend=1 00:06:30.041 --rc geninfo_all_blocks=1 00:06:30.041 --rc geninfo_unexecuted_blocks=1 00:06:30.041 00:06:30.041 ' 00:06:30.041 11:22:29 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.041 --rc genhtml_branch_coverage=1 00:06:30.041 --rc genhtml_function_coverage=1 00:06:30.041 --rc genhtml_legend=1 00:06:30.041 --rc geninfo_all_blocks=1 00:06:30.041 --rc geninfo_unexecuted_blocks=1 00:06:30.041 00:06:30.041 ' 00:06:30.041 11:22:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.041 11:22:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:634d121f-067e-4552-bd3e-8aec06a10c48 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=634d121f-067e-4552-bd3e-8aec06a10c48 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.300 11:22:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.300 11:22:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.300 11:22:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.300 11:22:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.300 11:22:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.301 11:22:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.301 11:22:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.301 11:22:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.301 11:22:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.301 11:22:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@51 -- # : 0 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.301 11:22:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:30.301 WARNING: No tests are enabled so not running JSON configuration tests 00:06:30.301 11:22:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:30.301 00:06:30.301 real 0m0.221s 00:06:30.301 user 0m0.129s 00:06:30.301 sys 0m0.093s 00:06:30.301 11:22:29 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.301 11:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.301 ************************************ 00:06:30.301 END TEST json_config 00:06:30.301 ************************************ 00:06:30.301 11:22:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.301 11:22:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.301 11:22:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.301 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.301 ************************************ 00:06:30.301 START TEST json_config_extra_key 00:06:30.301 ************************************ 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.301 11:22:29 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.301 11:22:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.301 --rc genhtml_branch_coverage=1 00:06:30.301 --rc genhtml_function_coverage=1 00:06:30.301 --rc genhtml_legend=1 00:06:30.301 --rc geninfo_all_blocks=1 00:06:30.301 --rc geninfo_unexecuted_blocks=1 00:06:30.301 00:06:30.301 ' 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.301 --rc genhtml_branch_coverage=1 00:06:30.301 --rc genhtml_function_coverage=1 00:06:30.301 --rc 
genhtml_legend=1 00:06:30.301 --rc geninfo_all_blocks=1 00:06:30.301 --rc geninfo_unexecuted_blocks=1 00:06:30.301 00:06:30.301 ' 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.301 --rc genhtml_branch_coverage=1 00:06:30.301 --rc genhtml_function_coverage=1 00:06:30.301 --rc genhtml_legend=1 00:06:30.301 --rc geninfo_all_blocks=1 00:06:30.301 --rc geninfo_unexecuted_blocks=1 00:06:30.301 00:06:30.301 ' 00:06:30.301 11:22:29 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.301 --rc genhtml_branch_coverage=1 00:06:30.301 --rc genhtml_function_coverage=1 00:06:30.301 --rc genhtml_legend=1 00:06:30.301 --rc geninfo_all_blocks=1 00:06:30.301 --rc geninfo_unexecuted_blocks=1 00:06:30.301 00:06:30.301 ' 00:06:30.301 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.301 11:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:634d121f-067e-4552-bd3e-8aec06a10c48 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=634d121f-067e-4552-bd3e-8aec06a10c48 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.560 11:22:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.560 11:22:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.560 11:22:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.560 11:22:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.560 11:22:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.560 11:22:29 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.560 11:22:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.560 11:22:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:30.560 11:22:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.560 11:22:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:30.560 INFO: launching applications... 
00:06:30.560 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57796 00:06:30.560 Waiting for target to run... 00:06:30.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57796 /var/tmp/spdk_tgt.sock 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57796 ']' 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:30.560 11:22:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.560 11:22:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.560 [2024-11-05 11:22:29.711120] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:30.560 [2024-11-05 11:22:29.711255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57796 ] 00:06:31.128 [2024-11-05 11:22:30.103394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.128 [2024-11-05 11:22:30.217332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.694 00:06:31.694 INFO: shutting down applications... 00:06:31.694 11:22:30 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.694 11:22:30 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:31.694 11:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:31.694 11:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57796 ]] 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57796 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:31.694 11:22:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.260 11:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.260 11:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.260 11:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:32.260 11:22:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.899 11:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.899 11:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.899 11:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:32.899 11:22:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.467 11:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.467 11:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.467 11:22:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:33.467 11:22:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.725 11:22:32 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:33.725 11:22:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.725 11:22:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:33.725 11:22:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.291 11:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.291 11:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.291 11:22:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:34.291 11:22:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.858 SPDK target shutdown done 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57796 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.858 11:22:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.858 11:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:34.858 Success 00:06:34.858 00:06:34.858 real 0m4.584s 00:06:34.858 user 0m4.059s 00:06:34.858 sys 0m0.564s 00:06:34.858 11:22:33 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.858 11:22:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.858 ************************************ 00:06:34.858 END TEST json_config_extra_key 00:06:34.858 ************************************ 00:06:34.858 11:22:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.858 11:22:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.858 11:22:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.858 11:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:34.858 ************************************ 00:06:34.858 START TEST alias_rpc 00:06:34.858 ************************************ 00:06:34.858 11:22:34 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.116 * Looking for test storage... 00:06:35.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:35.116 11:22:34 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.116 11:22:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.116 --rc genhtml_branch_coverage=1 00:06:35.116 --rc genhtml_function_coverage=1 00:06:35.116 --rc genhtml_legend=1 00:06:35.116 --rc geninfo_all_blocks=1 00:06:35.116 --rc geninfo_unexecuted_blocks=1 00:06:35.116 00:06:35.116 ' 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.116 --rc genhtml_branch_coverage=1 00:06:35.116 --rc genhtml_function_coverage=1 00:06:35.116 --rc 
genhtml_legend=1 00:06:35.116 --rc geninfo_all_blocks=1 00:06:35.116 --rc geninfo_unexecuted_blocks=1 00:06:35.116 00:06:35.116 ' 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.116 --rc genhtml_branch_coverage=1 00:06:35.116 --rc genhtml_function_coverage=1 00:06:35.116 --rc genhtml_legend=1 00:06:35.116 --rc geninfo_all_blocks=1 00:06:35.116 --rc geninfo_unexecuted_blocks=1 00:06:35.116 00:06:35.116 ' 00:06:35.116 11:22:34 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.116 --rc genhtml_branch_coverage=1 00:06:35.116 --rc genhtml_function_coverage=1 00:06:35.116 --rc genhtml_legend=1 00:06:35.116 --rc geninfo_all_blocks=1 00:06:35.116 --rc geninfo_unexecuted_blocks=1 00:06:35.116 00:06:35.116 ' 00:06:35.117 11:22:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.117 11:22:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57902 00:06:35.117 11:22:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.117 11:22:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57902 00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57902 ']' 00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.117 11:22:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.117 [2024-11-05 11:22:34.359187] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:35.117 [2024-11-05 11:22:34.359382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57902 ] 00:06:35.375 [2024-11-05 11:22:34.519795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.375 [2024-11-05 11:22:34.630504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.310 11:22:35 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.310 11:22:35 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:36.310 11:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:36.571 11:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57902 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57902 ']' 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57902 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57902 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57902' 00:06:36.571 killing process with pid 57902 00:06:36.571 11:22:35 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57902 00:06:36.571 11:22:35 alias_rpc -- common/autotest_common.sh@976 -- # wait 57902 00:06:39.107 00:06:39.107 real 0m4.100s 00:06:39.107 user 0m4.112s 00:06:39.107 sys 0m0.546s 00:06:39.107 11:22:38 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.107 11:22:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.107 ************************************ 00:06:39.107 END TEST alias_rpc 00:06:39.107 ************************************ 00:06:39.107 11:22:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:39.107 11:22:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:39.107 11:22:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.107 11:22:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.107 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.107 ************************************ 00:06:39.107 START TEST spdkcli_tcp 00:06:39.107 ************************************ 00:06:39.107 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:39.107 * Looking for test storage... 
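The `killprocess 57902` sequence in the alias_rpc teardown above follows a recognizable shape: probe liveness with `kill -0`, read the command name via `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then kill and reap. A hedged sketch (the name `killprocess_sketch` is hypothetical; the real helper lives in `autotest_common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the log.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1       # already gone?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")   # e.g. "reactor_0"
    [[ "$name" == sudo ]] && return 1            # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap so no zombie is left
    return 0
}

# Demo against a throwaway background process:
sleep 30 &
killprocess_sketch $! && echo "reaped"
```

The `comm` check matters because killing a `sudo` front-end would leave the privileged child running; the log's `'[' reactor_0 = sudo ']'` test is exactly that guard.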
00:06:39.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:39.107 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.107 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.107 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.366 11:22:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.366 --rc genhtml_branch_coverage=1 00:06:39.366 --rc genhtml_function_coverage=1 00:06:39.366 --rc genhtml_legend=1 00:06:39.366 --rc geninfo_all_blocks=1 00:06:39.366 --rc geninfo_unexecuted_blocks=1 00:06:39.366 00:06:39.366 ' 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.366 --rc genhtml_branch_coverage=1 00:06:39.366 --rc genhtml_function_coverage=1 00:06:39.366 --rc genhtml_legend=1 00:06:39.366 --rc geninfo_all_blocks=1 00:06:39.366 --rc geninfo_unexecuted_blocks=1 00:06:39.366 00:06:39.366 ' 00:06:39.366 11:22:38 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.366 --rc genhtml_branch_coverage=1 00:06:39.366 --rc genhtml_function_coverage=1 00:06:39.366 --rc genhtml_legend=1 00:06:39.366 --rc geninfo_all_blocks=1 00:06:39.366 --rc geninfo_unexecuted_blocks=1 00:06:39.366 00:06:39.366 ' 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.366 --rc genhtml_branch_coverage=1 00:06:39.366 --rc genhtml_function_coverage=1 00:06:39.366 --rc genhtml_legend=1 00:06:39.366 --rc geninfo_all_blocks=1 00:06:39.366 --rc geninfo_unexecuted_blocks=1 00:06:39.366 00:06:39.366 ' 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58015 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58015 00:06:39.366 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.366 11:22:38 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 58015 ']' 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.366 11:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 [2024-11-05 11:22:38.547612] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:39.366 [2024-11-05 11:22:38.547837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58015 ] 00:06:39.625 [2024-11-05 11:22:38.741302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.625 [2024-11-05 11:22:38.855868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.625 [2024-11-05 11:22:38.855905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.566 11:22:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.566 11:22:39 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:40.566 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.566 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58032 00:06:40.566 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.831 [ 00:06:40.831 "bdev_malloc_delete", 
00:06:40.831 "bdev_malloc_create", 00:06:40.831 "bdev_null_resize", 00:06:40.831 "bdev_null_delete", 00:06:40.831 "bdev_null_create", 00:06:40.831 "bdev_nvme_cuse_unregister", 00:06:40.831 "bdev_nvme_cuse_register", 00:06:40.831 "bdev_opal_new_user", 00:06:40.831 "bdev_opal_set_lock_state", 00:06:40.831 "bdev_opal_delete", 00:06:40.831 "bdev_opal_get_info", 00:06:40.831 "bdev_opal_create", 00:06:40.831 "bdev_nvme_opal_revert", 00:06:40.831 "bdev_nvme_opal_init", 00:06:40.831 "bdev_nvme_send_cmd", 00:06:40.831 "bdev_nvme_set_keys", 00:06:40.831 "bdev_nvme_get_path_iostat", 00:06:40.831 "bdev_nvme_get_mdns_discovery_info", 00:06:40.831 "bdev_nvme_stop_mdns_discovery", 00:06:40.831 "bdev_nvme_start_mdns_discovery", 00:06:40.831 "bdev_nvme_set_multipath_policy", 00:06:40.831 "bdev_nvme_set_preferred_path", 00:06:40.831 "bdev_nvme_get_io_paths", 00:06:40.831 "bdev_nvme_remove_error_injection", 00:06:40.831 "bdev_nvme_add_error_injection", 00:06:40.831 "bdev_nvme_get_discovery_info", 00:06:40.831 "bdev_nvme_stop_discovery", 00:06:40.831 "bdev_nvme_start_discovery", 00:06:40.831 "bdev_nvme_get_controller_health_info", 00:06:40.831 "bdev_nvme_disable_controller", 00:06:40.831 "bdev_nvme_enable_controller", 00:06:40.831 "bdev_nvme_reset_controller", 00:06:40.831 "bdev_nvme_get_transport_statistics", 00:06:40.831 "bdev_nvme_apply_firmware", 00:06:40.831 "bdev_nvme_detach_controller", 00:06:40.831 "bdev_nvme_get_controllers", 00:06:40.831 "bdev_nvme_attach_controller", 00:06:40.831 "bdev_nvme_set_hotplug", 00:06:40.831 "bdev_nvme_set_options", 00:06:40.831 "bdev_passthru_delete", 00:06:40.831 "bdev_passthru_create", 00:06:40.831 "bdev_lvol_set_parent_bdev", 00:06:40.831 "bdev_lvol_set_parent", 00:06:40.831 "bdev_lvol_check_shallow_copy", 00:06:40.832 "bdev_lvol_start_shallow_copy", 00:06:40.832 "bdev_lvol_grow_lvstore", 00:06:40.832 "bdev_lvol_get_lvols", 00:06:40.832 "bdev_lvol_get_lvstores", 00:06:40.832 "bdev_lvol_delete", 00:06:40.832 "bdev_lvol_set_read_only", 
00:06:40.832 "bdev_lvol_resize", 00:06:40.832 "bdev_lvol_decouple_parent", 00:06:40.832 "bdev_lvol_inflate", 00:06:40.832 "bdev_lvol_rename", 00:06:40.832 "bdev_lvol_clone_bdev", 00:06:40.832 "bdev_lvol_clone", 00:06:40.832 "bdev_lvol_snapshot", 00:06:40.832 "bdev_lvol_create", 00:06:40.832 "bdev_lvol_delete_lvstore", 00:06:40.832 "bdev_lvol_rename_lvstore", 00:06:40.832 "bdev_lvol_create_lvstore", 00:06:40.832 "bdev_raid_set_options", 00:06:40.832 "bdev_raid_remove_base_bdev", 00:06:40.832 "bdev_raid_add_base_bdev", 00:06:40.832 "bdev_raid_delete", 00:06:40.832 "bdev_raid_create", 00:06:40.832 "bdev_raid_get_bdevs", 00:06:40.832 "bdev_error_inject_error", 00:06:40.832 "bdev_error_delete", 00:06:40.832 "bdev_error_create", 00:06:40.832 "bdev_split_delete", 00:06:40.832 "bdev_split_create", 00:06:40.832 "bdev_delay_delete", 00:06:40.832 "bdev_delay_create", 00:06:40.832 "bdev_delay_update_latency", 00:06:40.832 "bdev_zone_block_delete", 00:06:40.832 "bdev_zone_block_create", 00:06:40.832 "blobfs_create", 00:06:40.832 "blobfs_detect", 00:06:40.832 "blobfs_set_cache_size", 00:06:40.832 "bdev_aio_delete", 00:06:40.832 "bdev_aio_rescan", 00:06:40.832 "bdev_aio_create", 00:06:40.832 "bdev_ftl_set_property", 00:06:40.832 "bdev_ftl_get_properties", 00:06:40.832 "bdev_ftl_get_stats", 00:06:40.832 "bdev_ftl_unmap", 00:06:40.832 "bdev_ftl_unload", 00:06:40.832 "bdev_ftl_delete", 00:06:40.832 "bdev_ftl_load", 00:06:40.832 "bdev_ftl_create", 00:06:40.832 "bdev_virtio_attach_controller", 00:06:40.832 "bdev_virtio_scsi_get_devices", 00:06:40.832 "bdev_virtio_detach_controller", 00:06:40.832 "bdev_virtio_blk_set_hotplug", 00:06:40.832 "bdev_iscsi_delete", 00:06:40.832 "bdev_iscsi_create", 00:06:40.832 "bdev_iscsi_set_options", 00:06:40.832 "accel_error_inject_error", 00:06:40.832 "ioat_scan_accel_module", 00:06:40.832 "dsa_scan_accel_module", 00:06:40.832 "iaa_scan_accel_module", 00:06:40.832 "keyring_file_remove_key", 00:06:40.832 "keyring_file_add_key", 00:06:40.832 
"keyring_linux_set_options", 00:06:40.832 "fsdev_aio_delete", 00:06:40.832 "fsdev_aio_create", 00:06:40.832 "iscsi_get_histogram", 00:06:40.832 "iscsi_enable_histogram", 00:06:40.832 "iscsi_set_options", 00:06:40.832 "iscsi_get_auth_groups", 00:06:40.832 "iscsi_auth_group_remove_secret", 00:06:40.832 "iscsi_auth_group_add_secret", 00:06:40.832 "iscsi_delete_auth_group", 00:06:40.832 "iscsi_create_auth_group", 00:06:40.832 "iscsi_set_discovery_auth", 00:06:40.832 "iscsi_get_options", 00:06:40.832 "iscsi_target_node_request_logout", 00:06:40.832 "iscsi_target_node_set_redirect", 00:06:40.832 "iscsi_target_node_set_auth", 00:06:40.832 "iscsi_target_node_add_lun", 00:06:40.832 "iscsi_get_stats", 00:06:40.832 "iscsi_get_connections", 00:06:40.832 "iscsi_portal_group_set_auth", 00:06:40.832 "iscsi_start_portal_group", 00:06:40.832 "iscsi_delete_portal_group", 00:06:40.832 "iscsi_create_portal_group", 00:06:40.832 "iscsi_get_portal_groups", 00:06:40.832 "iscsi_delete_target_node", 00:06:40.832 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.832 "iscsi_target_node_add_pg_ig_maps", 00:06:40.832 "iscsi_create_target_node", 00:06:40.832 "iscsi_get_target_nodes", 00:06:40.832 "iscsi_delete_initiator_group", 00:06:40.832 "iscsi_initiator_group_remove_initiators", 00:06:40.832 "iscsi_initiator_group_add_initiators", 00:06:40.832 "iscsi_create_initiator_group", 00:06:40.832 "iscsi_get_initiator_groups", 00:06:40.832 "nvmf_set_crdt", 00:06:40.832 "nvmf_set_config", 00:06:40.832 "nvmf_set_max_subsystems", 00:06:40.832 "nvmf_stop_mdns_prr", 00:06:40.832 "nvmf_publish_mdns_prr", 00:06:40.832 "nvmf_subsystem_get_listeners", 00:06:40.832 "nvmf_subsystem_get_qpairs", 00:06:40.832 "nvmf_subsystem_get_controllers", 00:06:40.832 "nvmf_get_stats", 00:06:40.832 "nvmf_get_transports", 00:06:40.832 "nvmf_create_transport", 00:06:40.832 "nvmf_get_targets", 00:06:40.832 "nvmf_delete_target", 00:06:40.832 "nvmf_create_target", 00:06:40.832 "nvmf_subsystem_allow_any_host", 00:06:40.832 
"nvmf_subsystem_set_keys", 00:06:40.832 "nvmf_subsystem_remove_host", 00:06:40.832 "nvmf_subsystem_add_host", 00:06:40.832 "nvmf_ns_remove_host", 00:06:40.832 "nvmf_ns_add_host", 00:06:40.832 "nvmf_subsystem_remove_ns", 00:06:40.832 "nvmf_subsystem_set_ns_ana_group", 00:06:40.832 "nvmf_subsystem_add_ns", 00:06:40.832 "nvmf_subsystem_listener_set_ana_state", 00:06:40.832 "nvmf_discovery_get_referrals", 00:06:40.832 "nvmf_discovery_remove_referral", 00:06:40.832 "nvmf_discovery_add_referral", 00:06:40.832 "nvmf_subsystem_remove_listener", 00:06:40.832 "nvmf_subsystem_add_listener", 00:06:40.832 "nvmf_delete_subsystem", 00:06:40.832 "nvmf_create_subsystem", 00:06:40.832 "nvmf_get_subsystems", 00:06:40.832 "env_dpdk_get_mem_stats", 00:06:40.832 "nbd_get_disks", 00:06:40.832 "nbd_stop_disk", 00:06:40.832 "nbd_start_disk", 00:06:40.832 "ublk_recover_disk", 00:06:40.832 "ublk_get_disks", 00:06:40.832 "ublk_stop_disk", 00:06:40.832 "ublk_start_disk", 00:06:40.832 "ublk_destroy_target", 00:06:40.832 "ublk_create_target", 00:06:40.832 "virtio_blk_create_transport", 00:06:40.832 "virtio_blk_get_transports", 00:06:40.832 "vhost_controller_set_coalescing", 00:06:40.832 "vhost_get_controllers", 00:06:40.832 "vhost_delete_controller", 00:06:40.832 "vhost_create_blk_controller", 00:06:40.832 "vhost_scsi_controller_remove_target", 00:06:40.832 "vhost_scsi_controller_add_target", 00:06:40.832 "vhost_start_scsi_controller", 00:06:40.832 "vhost_create_scsi_controller", 00:06:40.832 "thread_set_cpumask", 00:06:40.832 "scheduler_set_options", 00:06:40.832 "framework_get_governor", 00:06:40.832 "framework_get_scheduler", 00:06:40.832 "framework_set_scheduler", 00:06:40.832 "framework_get_reactors", 00:06:40.832 "thread_get_io_channels", 00:06:40.832 "thread_get_pollers", 00:06:40.832 "thread_get_stats", 00:06:40.832 "framework_monitor_context_switch", 00:06:40.832 "spdk_kill_instance", 00:06:40.832 "log_enable_timestamps", 00:06:40.832 "log_get_flags", 00:06:40.832 "log_clear_flag", 
00:06:40.832 "log_set_flag", 00:06:40.832 "log_get_level", 00:06:40.832 "log_set_level", 00:06:40.832 "log_get_print_level", 00:06:40.832 "log_set_print_level", 00:06:40.832 "framework_enable_cpumask_locks", 00:06:40.832 "framework_disable_cpumask_locks", 00:06:40.832 "framework_wait_init", 00:06:40.832 "framework_start_init", 00:06:40.832 "scsi_get_devices", 00:06:40.832 "bdev_get_histogram", 00:06:40.832 "bdev_enable_histogram", 00:06:40.832 "bdev_set_qos_limit", 00:06:40.832 "bdev_set_qd_sampling_period", 00:06:40.832 "bdev_get_bdevs", 00:06:40.832 "bdev_reset_iostat", 00:06:40.832 "bdev_get_iostat", 00:06:40.832 "bdev_examine", 00:06:40.832 "bdev_wait_for_examine", 00:06:40.832 "bdev_set_options", 00:06:40.832 "accel_get_stats", 00:06:40.832 "accel_set_options", 00:06:40.832 "accel_set_driver", 00:06:40.832 "accel_crypto_key_destroy", 00:06:40.832 "accel_crypto_keys_get", 00:06:40.832 "accel_crypto_key_create", 00:06:40.832 "accel_assign_opc", 00:06:40.832 "accel_get_module_info", 00:06:40.832 "accel_get_opc_assignments", 00:06:40.832 "vmd_rescan", 00:06:40.832 "vmd_remove_device", 00:06:40.832 "vmd_enable", 00:06:40.832 "sock_get_default_impl", 00:06:40.832 "sock_set_default_impl", 00:06:40.832 "sock_impl_set_options", 00:06:40.832 "sock_impl_get_options", 00:06:40.832 "iobuf_get_stats", 00:06:40.832 "iobuf_set_options", 00:06:40.832 "keyring_get_keys", 00:06:40.832 "framework_get_pci_devices", 00:06:40.832 "framework_get_config", 00:06:40.832 "framework_get_subsystems", 00:06:40.832 "fsdev_set_opts", 00:06:40.832 "fsdev_get_opts", 00:06:40.832 "trace_get_info", 00:06:40.832 "trace_get_tpoint_group_mask", 00:06:40.832 "trace_disable_tpoint_group", 00:06:40.832 "trace_enable_tpoint_group", 00:06:40.832 "trace_clear_tpoint_mask", 00:06:40.832 "trace_set_tpoint_mask", 00:06:40.832 "notify_get_notifications", 00:06:40.832 "notify_get_types", 00:06:40.832 "spdk_get_version", 00:06:40.832 "rpc_get_methods" 00:06:40.832 ] 00:06:40.832 11:22:39 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.832 11:22:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.832 11:22:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.832 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58015 00:06:40.832 11:22:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58015 ']' 00:06:40.832 11:22:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58015 00:06:40.832 11:22:39 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58015 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58015' 00:06:40.832 killing process with pid 58015 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58015 00:06:40.832 11:22:40 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58015 00:06:43.370 00:06:43.370 real 0m4.277s 00:06:43.370 user 0m7.617s 00:06:43.370 sys 0m0.610s 00:06:43.370 11:22:42 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.370 11:22:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.370 ************************************ 00:06:43.370 END TEST spdkcli_tcp 00:06:43.370 ************************************ 00:06:43.370 11:22:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:43.370 11:22:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.370 11:22:42 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.370 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:06:43.370 ************************************ 00:06:43.370 START TEST dpdk_mem_utility 00:06:43.370 ************************************ 00:06:43.370 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:43.629 * Looking for test storage... 00:06:43.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:43.629 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.629 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.629 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.629 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:43.629 
11:22:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.629 11:22:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.630 11:22:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc 
genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 11:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:43.630 11:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58137 00:06:43.630 11:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.630 11:22:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58137 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58137 ']' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
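Each test section above re-runs `lt 1.15 2` to decide whether the installed lcov predates the `--rc` options: `cmp_versions` splits both version strings on `.`, pads the shorter one with zeros, and compares field by field. A standalone sketch of that comparison (the function name `ver_lt` is an assumption; the real logic is in `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of dotted-version "less than": numeric, field-by-field, with
# missing trailing fields treated as 0 (so 1.15 < 2 but 2.1 >= 2.0).
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # strictly lower at this field
        (( x > y )) && return 1
    done
    return 1                      # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

A plain string comparison would get this wrong (`"1.15" > "1.2"` lexically), which is why the log's helper reads each dot-separated field into an array and compares numerically.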
00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.630 11:22:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.888 [2024-11-05 11:22:42.919712] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:43.888 [2024-11-05 11:22:42.919853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58137 ] 00:06:43.888 [2024-11-05 11:22:43.084145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.147 [2024-11-05 11:22:43.220876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.084 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.084 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:45.084 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.084 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.084 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.084 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.084 { 00:06:45.084 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.084 } 00:06:45.084 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.084 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:45.084 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:45.084 1 heaps 
totaling size 816.000000 MiB 00:06:45.084 size: 816.000000 MiB heap id: 0 00:06:45.084 end heaps---------- 00:06:45.084 9 mempools totaling size 595.772034 MiB 00:06:45.084 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.084 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.084 size: 92.545471 MiB name: bdev_io_58137 00:06:45.084 size: 50.003479 MiB name: msgpool_58137 00:06:45.084 size: 36.509338 MiB name: fsdev_io_58137 00:06:45.084 size: 21.763794 MiB name: PDU_Pool 00:06:45.084 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.084 size: 4.133484 MiB name: evtpool_58137 00:06:45.084 size: 0.026123 MiB name: Session_Pool 00:06:45.084 end mempools------- 00:06:45.084 6 memzones totaling size 4.142822 MiB 00:06:45.084 size: 1.000366 MiB name: RG_ring_0_58137 00:06:45.084 size: 1.000366 MiB name: RG_ring_1_58137 00:06:45.084 size: 1.000366 MiB name: RG_ring_4_58137 00:06:45.084 size: 1.000366 MiB name: RG_ring_5_58137 00:06:45.084 size: 0.125366 MiB name: RG_ring_2_58137 00:06:45.084 size: 0.015991 MiB name: RG_ring_3_58137 00:06:45.084 end memzones------- 00:06:45.084 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.345 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:06:45.345 list of free elements. 
size: 16.790161 MiB 00:06:45.345 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:45.345 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:45.345 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:45.345 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:45.345 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:45.345 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:45.345 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:45.345 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:45.345 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:45.345 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:45.345 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:45.345 element at address: 0x20001ac00000 with size: 0.560486 MiB 00:06:45.345 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:45.345 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:45.345 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:45.345 element at address: 0x200012c00000 with size: 0.443481 MiB 00:06:45.345 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:45.345 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:45.345 list of standard malloc elements. 
size: 199.288940 MiB 00:06:45.345 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:45.345 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:45.345 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:45.345 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:45.345 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:45.345 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:45.345 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:45.345 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:45.345 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:45.345 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:45.345 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:45.345 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:45.345 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:45.346 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:45.346 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:45.346 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:45.346 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac90cc0 with size: 0.000244 
MiB 00:06:45.346 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:45.346 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac928c0 
with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:45.347 element at 
address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:45.347 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:45.347 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b780 with size: 0.000244 MiB 
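The heap dump above is machine-readable: every record follows the fixed pattern `element at address: <hex> with size: <N> MiB`, so the list can be summarized rather than read line by line. A minimal sketch of such a summarizer (the regex and the `summarize` helper are illustrative, not part of the SPDK test scripts):

```python
import re

# Pattern matching the per-element heap-dump records seen in this log.
ELEMENT_RE = re.compile(
    r"element at address: (0x[0-9a-f]+) with size: ([0-9.]+) MiB"
)

def summarize(dump_text):
    """Return (element_count, total_mib) for a dump fragment."""
    sizes = [float(m.group(2)) for m in ELEMENT_RE.finditer(dump_text)]
    return len(sizes), round(sum(sizes), 6)

# Two records copied from the dump above, joined as they appear in the log.
sample = (
    "element at address: 0x20002806b680 with size: 0.000244 MiB "
    "element at address: 0x20002806b780 with size: 0.000244 MiB"
)
count, total = summarize(sample)
print(count, total)  # 2 0.000488
```

The same pattern works whether the records are one per line or re-wrapped, since the regex does not depend on line boundaries.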
00:06:45.347 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d380 with 
size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d780 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:45.347 element at address: 
0x20002806ef80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:45.347 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:45.347 list of memzone associated elements. 
size: 599.920898 MiB 00:06:45.348 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:45.348 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:45.348 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:45.348 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:45.348 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:45.348 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58137_0 00:06:45.348 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:45.348 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58137_0 00:06:45.348 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:45.348 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58137_0 00:06:45.348 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:45.348 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:45.348 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:45.348 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:45.348 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:45.348 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58137_0 00:06:45.348 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:45.348 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58137 00:06:45.348 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:45.348 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58137 00:06:45.348 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:45.348 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:45.348 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:45.348 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:45.348 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:45.348 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:45.348 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:45.348 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:45.348 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:45.348 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58137 00:06:45.348 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:45.348 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58137 00:06:45.348 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:45.348 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58137 00:06:45.348 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:45.348 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58137 00:06:45.348 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:45.348 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58137 00:06:45.348 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:45.348 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58137 00:06:45.348 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:45.348 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:45.348 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:45.348 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:45.348 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:45.348 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:45.348 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:45.348 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58137 00:06:45.348 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:45.348 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58137 00:06:45.348 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:45.348 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:45.348 element at address: 0x200028064140 with size: 0.023804 MiB
00:06:45.348 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:45.348 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:45.348 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58137
00:06:45.348 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:06:45.348 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:45.348 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:45.348 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58137
00:06:45.348 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:45.348 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58137
00:06:45.348 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:45.348 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58137
00:06:45.348 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:06:45.348 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:45.348 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:45.348 11:22:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58137
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58137 ']'
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58137
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58137
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58137'
00:06:45.348 killing process with pid 58137
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58137
00:06:45.348 11:22:44 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58137
00:06:48.635 
00:06:48.635 real 0m4.702s
00:06:48.635 user 0m4.668s
00:06:48.635 sys 0m0.604s
00:06:48.635 ************************************
00:06:48.635 END TEST dpdk_mem_utility
00:06:48.635 ************************************
00:06:48.635 11:22:47 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:48.635 11:22:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:48.635 11:22:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:48.635 11:22:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:48.635 11:22:47 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:48.635 11:22:47 -- common/autotest_common.sh@10 -- # set +x
00:06:48.635 ************************************
00:06:48.635 START TEST event
00:06:48.635 ************************************
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:48.635 * Looking for test storage...
00:06:48.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1691 -- # lcov --version
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:48.635 11:22:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:48.635 11:22:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:48.635 11:22:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:48.635 11:22:47 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:48.635 11:22:47 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:48.635 11:22:47 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:48.635 11:22:47 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:48.635 11:22:47 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:48.635 11:22:47 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:48.635 11:22:47 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:48.635 11:22:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:48.635 11:22:47 event -- scripts/common.sh@344 -- # case "$op" in
00:06:48.635 11:22:47 event -- scripts/common.sh@345 -- # : 1
00:06:48.635 11:22:47 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:48.635 11:22:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:48.635 11:22:47 event -- scripts/common.sh@365 -- # decimal 1
00:06:48.635 11:22:47 event -- scripts/common.sh@353 -- # local d=1
00:06:48.635 11:22:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.635 11:22:47 event -- scripts/common.sh@355 -- # echo 1
00:06:48.635 11:22:47 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:48.635 11:22:47 event -- scripts/common.sh@366 -- # decimal 2
00:06:48.635 11:22:47 event -- scripts/common.sh@353 -- # local d=2
00:06:48.635 11:22:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.635 11:22:47 event -- scripts/common.sh@355 -- # echo 2
00:06:48.635 11:22:47 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:48.635 11:22:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:48.635 11:22:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:48.635 11:22:47 event -- scripts/common.sh@368 -- # return 0
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.635 --rc genhtml_branch_coverage=1
00:06:48.635 --rc genhtml_function_coverage=1
00:06:48.635 --rc genhtml_legend=1
00:06:48.635 --rc geninfo_all_blocks=1
00:06:48.635 --rc geninfo_unexecuted_blocks=1
00:06:48.635 
00:06:48.635 '
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.635 --rc genhtml_branch_coverage=1
00:06:48.635 --rc genhtml_function_coverage=1
00:06:48.635 --rc genhtml_legend=1
00:06:48.635 --rc geninfo_all_blocks=1
00:06:48.635 --rc geninfo_unexecuted_blocks=1
00:06:48.635 
00:06:48.635 '
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.635 --rc genhtml_branch_coverage=1
00:06:48.635 --rc genhtml_function_coverage=1
00:06:48.635 --rc genhtml_legend=1
00:06:48.635 --rc geninfo_all_blocks=1
00:06:48.635 --rc geninfo_unexecuted_blocks=1
00:06:48.635 
00:06:48.635 '
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.635 --rc genhtml_branch_coverage=1
00:06:48.635 --rc genhtml_function_coverage=1
00:06:48.635 --rc genhtml_legend=1
00:06:48.635 --rc geninfo_all_blocks=1
00:06:48.635 --rc geninfo_unexecuted_blocks=1
00:06:48.635 
00:06:48.635 '
00:06:48.635 11:22:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:48.635 11:22:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:48.635 11:22:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:06:48.635 11:22:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:48.636 11:22:47 event -- common/autotest_common.sh@10 -- # set +x
00:06:48.636 ************************************
00:06:48.636 START TEST event_perf
00:06:48.636 ************************************
00:06:48.636 11:22:47 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:48.636 Running I/O for 1 seconds...[2024-11-05 11:22:47.613810] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization...
00:06:48.636 [2024-11-05 11:22:47.613958] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ]
00:06:48.895 [2024-11-05 11:22:47.798932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:48.895 Running I/O for 1 seconds...[2024-11-05 11:22:47.917429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.895 [2024-11-05 11:22:47.917639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:48.895 [2024-11-05 11:22:47.917855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.895 [2024-11-05 11:22:47.917903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:50.273 
00:06:50.273 lcore 0: 88657
00:06:50.273 lcore 1: 88654
00:06:50.273 lcore 2: 88650
00:06:50.273 lcore 3: 88653
00:06:50.273 done.
00:06:50.273 00:06:50.273 real 0m1.599s 00:06:50.273 user 0m4.340s 00:06:50.273 sys 0m0.129s 00:06:50.273 11:22:49 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.273 11:22:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 ************************************ 00:06:50.273 END TEST event_perf 00:06:50.273 ************************************ 00:06:50.273 11:22:49 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.273 11:22:49 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:50.273 11:22:49 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.273 11:22:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 ************************************ 00:06:50.273 START TEST event_reactor 00:06:50.273 ************************************ 00:06:50.273 11:22:49 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.273 [2024-11-05 11:22:49.281670] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:50.273 [2024-11-05 11:22:49.281938] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:06:50.273 [2024-11-05 11:22:49.464648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.534 [2024-11-05 11:22:49.605440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.913 test_start 00:06:51.913 oneshot 00:06:51.913 tick 100 00:06:51.913 tick 100 00:06:51.913 tick 250 00:06:51.913 tick 100 00:06:51.913 tick 100 00:06:51.913 tick 100 00:06:51.913 tick 250 00:06:51.913 tick 500 00:06:51.913 tick 100 00:06:51.913 tick 100 00:06:51.913 tick 250 00:06:51.913 tick 100 00:06:51.913 tick 100 00:06:51.913 test_end 00:06:51.913 00:06:51.913 real 0m1.624s 00:06:51.913 user 0m1.403s 00:06:51.913 sys 0m0.110s 00:06:51.913 11:22:50 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.913 11:22:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:51.913 ************************************ 00:06:51.913 END TEST event_reactor 00:06:51.913 ************************************ 00:06:51.913 11:22:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:51.913 11:22:50 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:51.913 11:22:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.913 11:22:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.913 ************************************ 00:06:51.913 START TEST event_reactor_perf 00:06:51.913 ************************************ 00:06:51.913 11:22:50 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:51.913 [2024-11-05 
11:22:50.967867] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:51.913 [2024-11-05 11:22:50.968022] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ] 00:06:51.913 [2024-11-05 11:22:51.131397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.171 [2024-11-05 11:22:51.270978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.572 test_start 00:06:53.572 test_end 00:06:53.572 Performance: 292682 events per second 00:06:53.572 00:06:53.572 real 0m1.621s 00:06:53.572 user 0m1.426s 00:06:53.572 sys 0m0.084s 00:06:53.572 11:22:52 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.572 11:22:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.572 ************************************ 00:06:53.572 END TEST event_reactor_perf 00:06:53.572 ************************************ 00:06:53.572 11:22:52 event -- event/event.sh@49 -- # uname -s 00:06:53.572 11:22:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:53.572 11:22:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.572 11:22:52 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.572 11:22:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.572 11:22:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.572 ************************************ 00:06:53.572 START TEST event_scheduler 00:06:53.572 ************************************ 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.572 * Looking for test storage... 
00:06:53.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.572 11:22:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.572 --rc genhtml_branch_coverage=1 00:06:53.572 --rc genhtml_function_coverage=1 00:06:53.572 --rc genhtml_legend=1 00:06:53.572 --rc geninfo_all_blocks=1 00:06:53.572 --rc geninfo_unexecuted_blocks=1 00:06:53.572 00:06:53.572 ' 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.572 
--rc genhtml_branch_coverage=1 00:06:53.572 --rc genhtml_function_coverage=1 00:06:53.572 --rc genhtml_legend=1 00:06:53.572 --rc geninfo_all_blocks=1 00:06:53.572 --rc geninfo_unexecuted_blocks=1 00:06:53.572 00:06:53.572 ' 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.572 --rc genhtml_branch_coverage=1 00:06:53.572 --rc genhtml_function_coverage=1 00:06:53.572 --rc genhtml_legend=1 00:06:53.572 --rc geninfo_all_blocks=1 00:06:53.572 --rc geninfo_unexecuted_blocks=1 00:06:53.572 00:06:53.572 ' 00:06:53.572 11:22:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:53.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.572 --rc genhtml_branch_coverage=1 00:06:53.572 --rc genhtml_function_coverage=1 00:06:53.572 --rc genhtml_legend=1 00:06:53.572 --rc geninfo_all_blocks=1 00:06:53.572 --rc geninfo_unexecuted_blocks=1 00:06:53.573 00:06:53.573 ' 00:06:53.573 11:22:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:53.573 11:22:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58402 00:06:53.573 11:22:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.573 11:22:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58402 00:06:53.573 11:22:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:53.573 11:22:52 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58402 ']' 00:06:53.573 11:22:52 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.573 11:22:52 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.573 11:22:52 event.event_scheduler -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.573 11:22:52 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.573 11:22:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.831 [2024-11-05 11:22:52.912934] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:53.831 [2024-11-05 11:22:52.913108] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58402 ] 00:06:53.831 [2024-11-05 11:22:53.091283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.089 [2024-11-05 11:22:53.270082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.089 [2024-11-05 11:22:53.270421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.089 [2024-11-05 11:22:53.270369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.089 [2024-11-05 11:22:53.270246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:54.656 11:22:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.656 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.656 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.656 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.656 POWER: Cannot set governor of lcore 0 to performance 00:06:54.656 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.656 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.656 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.656 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.656 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:54.656 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:54.656 POWER: Unable to set Power Management Environment for lcore 0 00:06:54.656 [2024-11-05 11:22:53.860475] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:54.656 [2024-11-05 11:22:53.860505] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:54.656 [2024-11-05 11:22:53.860518] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:54.656 [2024-11-05 11:22:53.860560] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.656 [2024-11-05 11:22:53.860573] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.656 [2024-11-05 11:22:53.860586] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.656 11:22:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.656 11:22:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 [2024-11-05 11:22:54.280491] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:55.224 11:22:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:55.224 11:22:54 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.224 11:22:54 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 ************************************ 00:06:55.224 START TEST scheduler_create_thread 00:06:55.224 ************************************ 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 2 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 3 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 4 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 5 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 6 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.224 7 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 8 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 9 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.224 10 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.224 11:22:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.606 11:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.606 11:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.606 11:22:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.606 11:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.606 11:22:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 11:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.555 11:22:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:57.555 11:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.555 11:22:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.493 11:22:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.493 11:22:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:58.493 11:22:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:58.493 11:22:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.493 11:22:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.062 ************************************ 00:06:59.062 END TEST scheduler_create_thread 00:06:59.062 ************************************ 00:06:59.062 11:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.062 00:06:59.062 real 0m3.881s 00:06:59.062 user 0m0.028s 00:06:59.062 sys 0m0.006s 00:06:59.062 11:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.062 11:22:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.062 11:22:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:59.062 11:22:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58402 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58402 ']' 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58402 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58402 00:06:59.062 killing process with pid 58402 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58402' 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58402 00:06:59.062 11:22:58 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58402 00:06:59.321 [2024-11-05 11:22:58.548523] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:00.697 00:07:00.697 real 0m7.310s 00:07:00.697 user 0m15.861s 00:07:00.697 sys 0m0.589s 00:07:00.697 11:22:59 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.697 ************************************ 00:07:00.697 END TEST event_scheduler 00:07:00.697 ************************************ 00:07:00.697 11:22:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.697 11:22:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:00.697 11:22:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:00.697 11:22:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.697 11:22:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.697 11:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.956 ************************************ 00:07:00.956 START TEST app_repeat 00:07:00.956 ************************************ 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58525 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:00.956 
11:22:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58525' 00:07:00.956 Process app_repeat pid: 58525 00:07:00.956 spdk_app_start Round 0 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:00.956 11:22:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58525 /var/tmp/spdk-nbd.sock 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58525 ']' 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.956 11:22:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.956 [2024-11-05 11:23:00.051710] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:00.956 [2024-11-05 11:23:00.051812] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58525 ] 00:07:00.956 [2024-11-05 11:23:00.227734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.216 [2024-11-05 11:23:00.345851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.216 [2024-11-05 11:23:00.345885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.813 11:23:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.813 11:23:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:01.813 11:23:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.072 Malloc0 00:07:02.072 11:23:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.332 Malloc1 00:07:02.332 11:23:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.332 11:23:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.332 11:23:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.332 11:23:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.332 11:23:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.333 11:23:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.333 11:23:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.591 /dev/nbd0 00:07:02.591 11:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.591 11:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.591 1+0 records in 00:07:02.591 1+0 
records out 00:07:02.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499827 s, 8.2 MB/s 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:02.591 11:23:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:02.591 11:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.591 11:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.591 11:23:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.850 /dev/nbd1 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.850 1+0 records in 00:07:02.850 1+0 records out 00:07:02.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395751 s, 10.3 MB/s 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:02.850 11:23:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.850 11:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.108 { 00:07:03.108 "nbd_device": "/dev/nbd0", 00:07:03.108 "bdev_name": "Malloc0" 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "nbd_device": "/dev/nbd1", 00:07:03.108 "bdev_name": "Malloc1" 00:07:03.108 } 00:07:03.108 ]' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.108 { 00:07:03.108 "nbd_device": "/dev/nbd0", 00:07:03.108 "bdev_name": "Malloc0" 00:07:03.108 }, 00:07:03.108 { 00:07:03.108 "nbd_device": "/dev/nbd1", 00:07:03.108 "bdev_name": "Malloc1" 00:07:03.108 } 00:07:03.108 ]' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.108 /dev/nbd1' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.108 /dev/nbd1' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.108 256+0 records in 00:07:03.108 256+0 records out 00:07:03.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527537 s, 199 MB/s 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.108 256+0 records in 00:07:03.108 256+0 records out 00:07:03.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226363 s, 46.3 MB/s 00:07:03.108 11:23:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.108 256+0 records in 00:07:03.108 256+0 records out 00:07:03.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245851 s, 42.7 MB/s 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.108 11:23:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.368 11:23:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.628 11:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.888 11:23:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.888 11:23:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.458 11:23:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.837 [2024-11-05 11:23:04.864671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.837 [2024-11-05 11:23:04.978628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.837 [2024-11-05 11:23:04.978630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.105 
[2024-11-05 11:23:05.192562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.105 [2024-11-05 11:23:05.192636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.516 11:23:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.516 spdk_app_start Round 1 00:07:07.516 11:23:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:07.516 11:23:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58525 /var/tmp/spdk-nbd.sock 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58525 ']' 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.516 11:23:06 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:07.516 11:23:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.776 Malloc0 00:07:07.776 11:23:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.036 Malloc1 00:07:08.036 11:23:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.036 11:23:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.296 11:23:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.296 11:23:07 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.296 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.296 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.296 11:23:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.296 /dev/nbd0 00:07:08.556 11:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.556 11:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.556 1+0 records in 00:07:08.556 1+0 records out 00:07:08.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505928 s, 8.1 MB/s 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.556 11:23:07 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:08.556 11:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:08.556 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.556 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.556 11:23:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.815 /dev/nbd1 00:07:08.815 11:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.816 1+0 records in 00:07:08.816 1+0 records out 00:07:08.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425953 s, 9.6 MB/s 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:08.816 11:23:07 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:08.816 11:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.816 11:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.076 { 00:07:09.076 "nbd_device": "/dev/nbd0", 00:07:09.076 "bdev_name": "Malloc0" 00:07:09.076 }, 00:07:09.076 { 00:07:09.076 "nbd_device": "/dev/nbd1", 00:07:09.076 "bdev_name": "Malloc1" 00:07:09.076 } 00:07:09.076 ]' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.076 { 00:07:09.076 "nbd_device": "/dev/nbd0", 00:07:09.076 "bdev_name": "Malloc0" 00:07:09.076 }, 00:07:09.076 { 00:07:09.076 "nbd_device": "/dev/nbd1", 00:07:09.076 "bdev_name": "Malloc1" 00:07:09.076 } 00:07:09.076 ]' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.076 /dev/nbd1' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.076 /dev/nbd1' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.076 
11:23:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.076 256+0 records in 00:07:09.076 256+0 records out 00:07:09.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510929 s, 205 MB/s 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.076 256+0 records in 00:07:09.076 256+0 records out 00:07:09.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273522 s, 38.3 MB/s 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.076 256+0 records in 00:07:09.076 256+0 records out 00:07:09.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249708 s, 42.0 MB/s 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.076 11:23:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.336 11:23:08 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.336 11:23:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.596 11:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.855 11:23:09 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.855 11:23:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.855 11:23:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.500 11:23:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.878 [2024-11-05 11:23:10.783624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.878 [2024-11-05 11:23:10.914764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.878 [2024-11-05 11:23:10.914787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.878 [2024-11-05 11:23:11.129691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.878 [2024-11-05 11:23:11.129792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:13.779 spdk_app_start Round 2 00:07:13.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:13.779 11:23:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:13.779 11:23:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:13.779 11:23:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58525 /var/tmp/spdk-nbd.sock 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58525 ']' 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.779 11:23:12 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:13.779 11:23:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.036 Malloc0 00:07:14.036 11:23:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.600 Malloc1 00:07:14.600 11:23:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.600 11:23:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:14.858 /dev/nbd0 00:07:14.858 11:23:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.858 11:23:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.858 1+0 records in 00:07:14.858 1+0 records out 00:07:14.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284093 s, 14.4 MB/s 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:14.858 11:23:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:14.858 11:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.858 11:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.858 11:23:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.115 /dev/nbd1 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:15.115 11:23:14 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.115 1+0 records in 00:07:15.115 1+0 records out 00:07:15.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274344 s, 14.9 MB/s 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:15.115 11:23:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.115 11:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.373 { 00:07:15.373 "nbd_device": "/dev/nbd0", 00:07:15.373 "bdev_name": "Malloc0" 00:07:15.373 }, 00:07:15.373 { 00:07:15.373 "nbd_device": "/dev/nbd1", 00:07:15.373 "bdev_name": "Malloc1" 00:07:15.373 } 00:07:15.373 ]' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.373 { 
00:07:15.373 "nbd_device": "/dev/nbd0", 00:07:15.373 "bdev_name": "Malloc0" 00:07:15.373 }, 00:07:15.373 { 00:07:15.373 "nbd_device": "/dev/nbd1", 00:07:15.373 "bdev_name": "Malloc1" 00:07:15.373 } 00:07:15.373 ]' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.373 /dev/nbd1' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.373 /dev/nbd1' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.373 256+0 records in 00:07:15.373 256+0 records out 00:07:15.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494449 s, 212 MB/s 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.373 11:23:14 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.373 256+0 records in 00:07:15.373 256+0 records out 00:07:15.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249849 s, 42.0 MB/s 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.373 11:23:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.632 256+0 records in 00:07:15.632 256+0 records out 00:07:15.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022361 s, 46.9 MB/s 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:15.632 11:23:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.633 11:23:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.892 11:23:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.892 11:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.152 11:23:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.152 11:23:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:16.804 11:23:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.773 
[2024-11-05 11:23:16.952208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.033 [2024-11-05 11:23:17.070043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.033 [2024-11-05 11:23:17.070046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.033 [2024-11-05 11:23:17.264831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.033 [2024-11-05 11:23:17.264914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.935 11:23:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58525 /var/tmp/spdk-nbd.sock 00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58525 ']' 00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:19.935 11:23:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:19.935 11:23:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58525 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58525 ']' 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58525 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58525 00:07:19.935 killing process with pid 58525 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58525' 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58525 00:07:19.935 11:23:19 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58525 00:07:20.868 spdk_app_start is called in Round 0. 00:07:20.868 Shutdown signal received, stop current app iteration 00:07:20.868 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:07:20.868 spdk_app_start is called in Round 1. 00:07:20.868 Shutdown signal received, stop current app iteration 00:07:20.868 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:07:20.868 spdk_app_start is called in Round 2. 
00:07:20.868 Shutdown signal received, stop current app iteration 00:07:20.868 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:07:20.868 spdk_app_start is called in Round 3. 00:07:20.868 Shutdown signal received, stop current app iteration 00:07:20.868 11:23:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:20.868 11:23:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:20.868 00:07:20.868 real 0m20.142s 00:07:20.868 user 0m43.534s 00:07:20.868 sys 0m2.925s 00:07:20.868 11:23:20 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.868 11:23:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.868 ************************************ 00:07:20.868 END TEST app_repeat 00:07:20.868 ************************************ 00:07:21.125 11:23:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:21.125 11:23:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:21.125 11:23:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.125 11:23:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.125 11:23:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.125 ************************************ 00:07:21.125 START TEST cpu_locks 00:07:21.125 ************************************ 00:07:21.125 11:23:20 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:21.125 * Looking for test storage... 
00:07:21.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:21.125 11:23:20 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.126 11:23:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.126 11:23:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.126 11:23:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:21.126 11:23:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.384 11:23:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.384 --rc genhtml_branch_coverage=1 00:07:21.384 --rc genhtml_function_coverage=1 00:07:21.384 --rc genhtml_legend=1 00:07:21.384 --rc geninfo_all_blocks=1 00:07:21.384 --rc geninfo_unexecuted_blocks=1 00:07:21.384 00:07:21.384 ' 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.384 --rc genhtml_branch_coverage=1 00:07:21.384 --rc genhtml_function_coverage=1 00:07:21.384 --rc genhtml_legend=1 00:07:21.384 --rc geninfo_all_blocks=1 00:07:21.384 --rc geninfo_unexecuted_blocks=1 
00:07:21.384 00:07:21.384 ' 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.384 --rc genhtml_branch_coverage=1 00:07:21.384 --rc genhtml_function_coverage=1 00:07:21.384 --rc genhtml_legend=1 00:07:21.384 --rc geninfo_all_blocks=1 00:07:21.384 --rc geninfo_unexecuted_blocks=1 00:07:21.384 00:07:21.384 ' 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.384 --rc genhtml_branch_coverage=1 00:07:21.384 --rc genhtml_function_coverage=1 00:07:21.384 --rc genhtml_legend=1 00:07:21.384 --rc geninfo_all_blocks=1 00:07:21.384 --rc geninfo_unexecuted_blocks=1 00:07:21.384 00:07:21.384 ' 00:07:21.384 11:23:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:21.384 11:23:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:21.384 11:23:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:21.384 11:23:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.384 11:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.384 ************************************ 00:07:21.384 START TEST default_locks 00:07:21.384 ************************************ 00:07:21.384 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:21.384 11:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58985 00:07:21.384 11:23:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58985 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58985 ']' 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.385 11:23:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.385 [2024-11-05 11:23:20.525922] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:21.385 [2024-11-05 11:23:20.526057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:07:21.644 [2024-11-05 11:23:20.699618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.644 [2024-11-05 11:23:20.816207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.589 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.589 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:22.589 11:23:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58985 00:07:22.589 11:23:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58985 00:07:22.589 11:23:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.848 11:23:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58985 00:07:22.848 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58985 ']' 00:07:22.848 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58985 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58985 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:22.849 killing process with pid 58985 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58985' 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58985 00:07:22.849 11:23:21 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58985 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58985 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58985 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58985 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58985 ']' 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58985) - No such process 00:07:25.400 ERROR: process (pid: 58985) is no longer running 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.400 00:07:25.400 real 0m4.228s 00:07:25.400 user 0m4.151s 00:07:25.400 sys 0m0.585s 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.400 ************************************ 00:07:25.400 END TEST default_locks 00:07:25.400 ************************************ 00:07:25.400 11:23:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.659 11:23:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:25.659 11:23:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:07:25.659 11:23:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.659 11:23:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.659 ************************************ 00:07:25.659 START TEST default_locks_via_rpc 00:07:25.659 ************************************ 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59066 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59066 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59066 ']' 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.659 11:23:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.659 [2024-11-05 11:23:24.843900] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:25.659 [2024-11-05 11:23:24.844079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:07:25.918 [2024-11-05 11:23:25.029433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.918 [2024-11-05 11:23:25.171311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.298 11:23:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59066 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59066 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59066 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59066 ']' 00:07:27.298 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59066 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59066 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:27.557 killing process with pid 59066 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59066' 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59066 00:07:27.557 11:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59066 00:07:30.095 00:07:30.095 real 0m4.351s 00:07:30.095 user 0m4.137s 00:07:30.095 sys 0m0.810s 00:07:30.095 11:23:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.095 11:23:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.095 ************************************ 00:07:30.095 END TEST default_locks_via_rpc 00:07:30.095 ************************************ 00:07:30.095 11:23:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:30.095 11:23:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.095 11:23:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.095 11:23:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.095 ************************************ 00:07:30.095 START TEST non_locking_app_on_locked_coremask 00:07:30.095 ************************************ 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59140 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59140 /var/tmp/spdk.sock 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59140 ']' 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.095 11:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.095 [2024-11-05 11:23:29.235844] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:30.095 [2024-11-05 11:23:29.235963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:07:30.356 [2024-11-05 11:23:29.409828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.356 [2024-11-05 11:23:29.520844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59158 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59158 /var/tmp/spdk2.sock 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59158 ']' 00:07:31.291 11:23:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.291 11:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.291 [2024-11-05 11:23:30.483576] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:31.291 [2024-11-05 11:23:30.483705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59158 ] 00:07:31.550 [2024-11-05 11:23:30.658233] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.550 [2024-11-05 11:23:30.658286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.808 [2024-11-05 11:23:30.893420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.344 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.344 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:34.344 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59140 00:07:34.344 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59140 00:07:34.344 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59140 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59140 ']' 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59140 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59140 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
59140' 00:07:34.603 killing process with pid 59140 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59140 00:07:34.603 11:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59140 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59158 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59158 ']' 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59158 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59158 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:39.878 killing process with pid 59158 00:07:39.878 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59158' 00:07:39.879 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59158 00:07:39.879 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59158 00:07:41.786 00:07:41.786 real 0m11.894s 00:07:41.786 user 0m12.164s 00:07:41.786 sys 0m1.341s 00:07:41.786 11:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:07:41.786 11:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.786 ************************************ 00:07:41.786 END TEST non_locking_app_on_locked_coremask 00:07:41.786 ************************************ 00:07:42.045 11:23:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:42.045 11:23:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:42.045 11:23:41 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.045 11:23:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 ************************************ 00:07:42.045 START TEST locking_app_on_unlocked_coremask 00:07:42.045 ************************************ 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59307 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59307 /var/tmp/spdk.sock 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59307 ']' 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.045 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 [2024-11-05 11:23:41.198849] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:42.045 [2024-11-05 11:23:41.198983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:07:42.306 [2024-11-05 11:23:41.373140] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:42.306 [2024-11-05 11:23:41.373248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.306 [2024-11-05 11:23:41.492708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59329 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59329 /var/tmp/spdk2.sock 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59329 ']' 
00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.245 11:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.245 [2024-11-05 11:23:42.445837] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:43.245 [2024-11-05 11:23:42.446023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:07:43.504 [2024-11-05 11:23:42.621935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.764 [2024-11-05 11:23:42.851080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59329 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59329 00:07:46.302 11:23:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59307 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59307 ']' 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59307 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59307 00:07:46.302 killing process with pid 59307 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59307' 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59307 00:07:46.302 11:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59307 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59329 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59329 ']' 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59329 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # uname 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59329 00:07:51.580 killing process with pid 59329 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.580 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.581 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59329' 00:07:51.581 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59329 00:07:51.581 11:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59329 00:07:53.490 ************************************ 00:07:53.490 END TEST locking_app_on_unlocked_coremask 00:07:53.490 ************************************ 00:07:53.490 00:07:53.490 real 0m11.644s 00:07:53.490 user 0m11.920s 00:07:53.490 sys 0m1.176s 00:07:53.490 11:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.490 11:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.749 11:23:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:53.749 11:23:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.749 11:23:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.749 11:23:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.749 ************************************ 00:07:53.749 START TEST 
locking_app_on_locked_coremask 00:07:53.749 ************************************ 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59477 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59477 /var/tmp/spdk.sock 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59477 ']' 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.749 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.749 [2024-11-05 11:23:52.906026] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:53.749 [2024-11-05 11:23:52.906275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59477 ] 00:07:54.008 [2024-11-05 11:23:53.080465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.008 [2024-11-05 11:23:53.195672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59493 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59493 /var/tmp/spdk2.sock 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59493 /var/tmp/spdk2.sock 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59493 /var/tmp/spdk2.sock 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59493 ']' 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.976 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.976 [2024-11-05 11:23:54.169662] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:54.976 [2024-11-05 11:23:54.169863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:07:55.236 [2024-11-05 11:23:54.346679] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59477 has claimed it. 00:07:55.236 [2024-11-05 11:23:54.346749] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:55.804 ERROR: process (pid: 59493) is no longer running 00:07:55.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59493) - No such process 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59477 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59477 00:07:55.804 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59477 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59477 ']' 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59477 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59477 00:07:56.064 
11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59477' 00:07:56.064 killing process with pid 59477 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59477 00:07:56.064 11:23:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59477 00:07:58.602 ************************************ 00:07:58.602 END TEST locking_app_on_locked_coremask 00:07:58.602 00:07:58.602 real 0m4.801s 00:07:58.602 user 0m5.005s 00:07:58.602 sys 0m0.731s 00:07:58.603 11:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.603 11:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.603 ************************************ 00:07:58.603 11:23:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:58.603 11:23:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.603 11:23:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.603 11:23:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.603 ************************************ 00:07:58.603 START TEST locking_overlapped_coremask 00:07:58.603 ************************************ 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59568 00:07:58.603 11:23:57 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59568 /var/tmp/spdk.sock 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59568 ']' 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.603 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.603 [2024-11-05 11:23:57.791776] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:58.603 [2024-11-05 11:23:57.792027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:07:58.861 [2024-11-05 11:23:57.972935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.861 [2024-11-05 11:23:58.091539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.861 [2024-11-05 11:23:58.091679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.861 [2024-11-05 11:23:58.091718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59586 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59586 /var/tmp/spdk2.sock 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59586 /var/tmp/spdk2.sock 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.794 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59586 /var/tmp/spdk2.sock 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59586 ']' 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.795 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.054 [2024-11-05 11:23:59.138552] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:00.054 [2024-11-05 11:23:59.138812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ] 00:08:00.054 [2024-11-05 11:23:59.321821] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59568 has claimed it. 00:08:00.054 [2024-11-05 11:23:59.321920] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:00.620 ERROR: process (pid: 59586) is no longer running 00:08:00.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59586) - No such process 00:08:00.620 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59568 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59568 ']' 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59568 00:08:00.621 11:23:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59568 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59568' 00:08:00.621 killing process with pid 59568 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59568 00:08:00.621 11:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59568 00:08:03.159 00:08:03.159 real 0m4.694s 00:08:03.159 user 0m12.789s 00:08:03.159 sys 0m0.647s 00:08:03.159 11:24:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.159 11:24:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.159 ************************************ 00:08:03.159 END TEST locking_overlapped_coremask 00:08:03.159 ************************************ 00:08:03.159 11:24:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:03.159 11:24:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.159 11:24:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.159 11:24:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.159 ************************************ 00:08:03.159 START TEST 
locking_overlapped_coremask_via_rpc 00:08:03.159 ************************************ 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59652 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59652 /var/tmp/spdk.sock 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59652 ']' 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.419 11:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.419 [2024-11-05 11:24:02.537632] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:03.419 [2024-11-05 11:24:02.537838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:08:03.679 [2024-11-05 11:24:02.703061] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:03.679 [2024-11-05 11:24:02.703137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.679 [2024-11-05 11:24:02.826585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.679 [2024-11-05 11:24:02.826735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.679 [2024-11-05 11:24:02.826787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59674 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59674 /var/tmp/spdk2.sock 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59674 ']' 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.618 11:24:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.618 11:24:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.618 [2024-11-05 11:24:03.824793] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:04.618 [2024-11-05 11:24:03.824935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:08:04.878 [2024-11-05 11:24:04.004465] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:04.878 [2024-11-05 11:24:04.004543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.137 [2024-11-05 11:24:04.311010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.137 [2024-11-05 11:24:04.314343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.137 [2024-11-05 11:24:04.314382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.676 11:24:06 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.676 [2024-11-05 11:24:06.483411] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59652 has claimed it. 00:08:07.676 request: 00:08:07.676 { 00:08:07.676 "method": "framework_enable_cpumask_locks", 00:08:07.676 "req_id": 1 00:08:07.676 } 00:08:07.676 Got JSON-RPC error response 00:08:07.676 response: 00:08:07.676 { 00:08:07.676 "code": -32603, 00:08:07.676 "message": "Failed to claim CPU core: 2" 00:08:07.676 } 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59652 /var/tmp/spdk.sock 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59652 ']' 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59674 /var/tmp/spdk2.sock 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59674 ']' 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.676 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 ************************************ 00:08:07.937 END TEST locking_overlapped_coremask_via_rpc 00:08:07.937 ************************************ 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:07.937 00:08:07.937 real 0m4.551s 00:08:07.937 user 0m1.407s 00:08:07.937 sys 0m0.211s 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.937 11:24:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.937 11:24:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:07.937 11:24:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59652 ]] 00:08:07.937 11:24:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59652 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59652 ']' 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59652 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59652 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59652' 00:08:07.937 killing process with pid 59652 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59652 00:08:07.937 11:24:07 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59652 00:08:10.477 11:24:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59674 ]] 00:08:10.477 11:24:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59674 00:08:10.477 11:24:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59674 ']' 00:08:10.477 11:24:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59674 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59674 00:08:10.478 killing process with pid 59674 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59674' 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59674 00:08:10.478 11:24:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59674 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.020 Process with pid 59652 is not found 00:08:13.020 Process with pid 59674 is not found 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59652 ]] 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59652 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59652 ']' 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59652 00:08:13.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59652) - No such process 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59652 is not found' 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59674 ]] 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59674 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59674 ']' 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59674 00:08:13.020 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59674) - No such process 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59674 is not found' 00:08:13.020 11:24:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.020 00:08:13.020 real 0m51.788s 00:08:13.020 user 1m28.183s 00:08:13.020 sys 0m6.932s 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.020 11:24:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.020 
************************************ 00:08:13.020 END TEST cpu_locks 00:08:13.020 ************************************ 00:08:13.020 00:08:13.020 real 1m24.722s 00:08:13.020 user 2m35.004s 00:08:13.020 sys 0m11.158s 00:08:13.020 11:24:12 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.020 11:24:12 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.020 ************************************ 00:08:13.020 END TEST event 00:08:13.020 ************************************ 00:08:13.020 11:24:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:13.020 11:24:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.020 11:24:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.020 11:24:12 -- common/autotest_common.sh@10 -- # set +x 00:08:13.020 ************************************ 00:08:13.020 START TEST thread 00:08:13.020 ************************************ 00:08:13.020 11:24:12 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:13.020 * Looking for test storage... 
00:08:13.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:13.020 11:24:12 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.020 11:24:12 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.020 11:24:12 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.279 11:24:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.279 11:24:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.279 11:24:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.279 11:24:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.279 11:24:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.279 11:24:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.279 11:24:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.279 11:24:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.279 11:24:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.279 11:24:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.279 11:24:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.279 11:24:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:13.279 11:24:12 thread -- scripts/common.sh@345 -- # : 1 00:08:13.279 11:24:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.279 11:24:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.279 11:24:12 thread -- scripts/common.sh@365 -- # decimal 1 00:08:13.279 11:24:12 thread -- scripts/common.sh@353 -- # local d=1 00:08:13.279 11:24:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.279 11:24:12 thread -- scripts/common.sh@355 -- # echo 1 00:08:13.279 11:24:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.279 11:24:12 thread -- scripts/common.sh@366 -- # decimal 2 00:08:13.279 11:24:12 thread -- scripts/common.sh@353 -- # local d=2 00:08:13.279 11:24:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.279 11:24:12 thread -- scripts/common.sh@355 -- # echo 2 00:08:13.279 11:24:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.279 11:24:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.279 11:24:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.279 11:24:12 thread -- scripts/common.sh@368 -- # return 0 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.279 --rc genhtml_branch_coverage=1 00:08:13.279 --rc genhtml_function_coverage=1 00:08:13.279 --rc genhtml_legend=1 00:08:13.279 --rc geninfo_all_blocks=1 00:08:13.279 --rc geninfo_unexecuted_blocks=1 00:08:13.279 00:08:13.279 ' 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.279 --rc genhtml_branch_coverage=1 00:08:13.279 --rc genhtml_function_coverage=1 00:08:13.279 --rc genhtml_legend=1 00:08:13.279 --rc geninfo_all_blocks=1 00:08:13.279 --rc geninfo_unexecuted_blocks=1 00:08:13.279 00:08:13.279 ' 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.279 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.279 --rc genhtml_branch_coverage=1 00:08:13.279 --rc genhtml_function_coverage=1 00:08:13.279 --rc genhtml_legend=1 00:08:13.279 --rc geninfo_all_blocks=1 00:08:13.279 --rc geninfo_unexecuted_blocks=1 00:08:13.279 00:08:13.279 ' 00:08:13.279 11:24:12 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.279 --rc genhtml_branch_coverage=1 00:08:13.279 --rc genhtml_function_coverage=1 00:08:13.279 --rc genhtml_legend=1 00:08:13.279 --rc geninfo_all_blocks=1 00:08:13.279 --rc geninfo_unexecuted_blocks=1 00:08:13.279 00:08:13.279 ' 00:08:13.280 11:24:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.280 11:24:12 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:13.280 11:24:12 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.280 11:24:12 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.280 ************************************ 00:08:13.280 START TEST thread_poller_perf 00:08:13.280 ************************************ 00:08:13.280 11:24:12 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.280 [2024-11-05 11:24:12.409277] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:13.280 [2024-11-05 11:24:12.409478] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:08:13.539 [2024-11-05 11:24:12.583280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.539 [2024-11-05 11:24:12.697499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.539 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:14.928 [2024-11-05T11:24:14.202Z] ====================================== 00:08:14.928 [2024-11-05T11:24:14.202Z] busy:2301898242 (cyc) 00:08:14.928 [2024-11-05T11:24:14.202Z] total_run_count: 402000 00:08:14.928 [2024-11-05T11:24:14.202Z] tsc_hz: 2290000000 (cyc) 00:08:14.928 [2024-11-05T11:24:14.202Z] ====================================== 00:08:14.928 [2024-11-05T11:24:14.202Z] poller_cost: 5726 (cyc), 2500 (nsec) 00:08:14.928 00:08:14.928 real 0m1.571s 00:08:14.928 user 0m1.367s 00:08:14.928 sys 0m0.096s 00:08:14.928 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.928 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:14.928 ************************************ 00:08:14.928 END TEST thread_poller_perf 00:08:14.928 ************************************ 00:08:14.928 11:24:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:14.928 11:24:13 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:14.928 11:24:13 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.928 11:24:13 thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.928 ************************************ 00:08:14.928 START TEST thread_poller_perf 00:08:14.928 
************************************ 00:08:14.928 11:24:14 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:14.928 [2024-11-05 11:24:14.053042] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:14.928 [2024-11-05 11:24:14.053168] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:08:15.199 [2024-11-05 11:24:14.227960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.199 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:15.199 [2024-11-05 11:24:14.341451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.579 [2024-11-05T11:24:15.853Z] ====================================== 00:08:16.579 [2024-11-05T11:24:15.853Z] busy:2293552050 (cyc) 00:08:16.579 [2024-11-05T11:24:15.853Z] total_run_count: 5174000 00:08:16.579 [2024-11-05T11:24:15.853Z] tsc_hz: 2290000000 (cyc) 00:08:16.579 [2024-11-05T11:24:15.853Z] ====================================== 00:08:16.579 [2024-11-05T11:24:15.853Z] poller_cost: 443 (cyc), 193 (nsec) 00:08:16.579 00:08:16.579 real 0m1.573s 00:08:16.579 user 0m1.374s 00:08:16.579 sys 0m0.092s 00:08:16.579 ************************************ 00:08:16.579 END TEST thread_poller_perf 00:08:16.579 ************************************ 00:08:16.579 11:24:15 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.579 11:24:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.579 11:24:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:16.579 ************************************ 00:08:16.579 END TEST thread 00:08:16.579 ************************************ 00:08:16.579 
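The two poller_perf reports above can be cross-checked by hand: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. A minimal sketch with the values copied from the log (the rounding here is an assumption; SPDK's own conversion may differ in the last digit):

```shell
# Back-of-envelope check of the poller_cost figures reported above:
# cycles per run = busy / total_run_count, then cycles -> ns via tsc_hz.
poller_cost() {
    local busy=$1 runs=$2 tsc_hz=$3
    local cyc=$(( busy / runs ))                               # cycles per poller run
    local nsec=$(( (cyc * 1000000000 + tsc_hz / 2) / tsc_hz )) # round to nearest ns
    echo "$cyc $nsec"
}
poller_cost 2301898242  402000 2290000000   # run 1 (1 us period): "5726 2500"
poller_cost 2293552050 5174000 2290000000   # run 2 (0 us period): "443 193"
```

Both results match the `poller_cost:` lines in the log; the zero-period run is roughly 13x cheaper per poll because the reactor never arms a timer between runs.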
00:08:16.579 real 0m3.528s 00:08:16.579 user 0m2.919s 00:08:16.579 sys 0m0.403s 00:08:16.579 11:24:15 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.579 11:24:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.579 11:24:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:16.579 11:24:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:16.579 11:24:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.579 11:24:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.579 11:24:15 -- common/autotest_common.sh@10 -- # set +x 00:08:16.579 ************************************ 00:08:16.579 START TEST app_cmdline 00:08:16.579 ************************************ 00:08:16.579 11:24:15 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:16.579 * Looking for test storage... 00:08:16.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:16.579 11:24:15 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:16.579 11:24:15 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:16.579 11:24:15 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:16.839 11:24:15 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:16.839 11:24:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.839 11:24:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.839 11:24:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.840 11:24:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:16.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.840 --rc genhtml_branch_coverage=1 00:08:16.840 --rc genhtml_function_coverage=1 00:08:16.840 --rc 
genhtml_legend=1 00:08:16.840 --rc geninfo_all_blocks=1 00:08:16.840 --rc geninfo_unexecuted_blocks=1 00:08:16.840 00:08:16.840 ' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:16.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.840 --rc genhtml_branch_coverage=1 00:08:16.840 --rc genhtml_function_coverage=1 00:08:16.840 --rc genhtml_legend=1 00:08:16.840 --rc geninfo_all_blocks=1 00:08:16.840 --rc geninfo_unexecuted_blocks=1 00:08:16.840 00:08:16.840 ' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:16.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.840 --rc genhtml_branch_coverage=1 00:08:16.840 --rc genhtml_function_coverage=1 00:08:16.840 --rc genhtml_legend=1 00:08:16.840 --rc geninfo_all_blocks=1 00:08:16.840 --rc geninfo_unexecuted_blocks=1 00:08:16.840 00:08:16.840 ' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:16.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.840 --rc genhtml_branch_coverage=1 00:08:16.840 --rc genhtml_function_coverage=1 00:08:16.840 --rc genhtml_legend=1 00:08:16.840 --rc geninfo_all_blocks=1 00:08:16.840 --rc geninfo_unexecuted_blocks=1 00:08:16.840 00:08:16.840 ' 00:08:16.840 11:24:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:16.840 11:24:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59996 00:08:16.840 11:24:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:16.840 11:24:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59996 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59996 ']' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.840 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.840 11:24:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:16.840 [2024-11-05 11:24:16.029557] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:16.840 [2024-11-05 11:24:16.029669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59996 ] 00:08:17.100 [2024-11-05 11:24:16.204108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.100 [2024-11-05 11:24:16.322244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.039 11:24:17 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.039 11:24:17 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:18.039 11:24:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:18.298 { 00:08:18.298 "version": "SPDK v25.01-pre git sha1 1aeff8917", 00:08:18.298 "fields": { 00:08:18.298 "major": 25, 00:08:18.298 "minor": 1, 00:08:18.298 "patch": 0, 00:08:18.298 "suffix": "-pre", 00:08:18.298 "commit": "1aeff8917" 00:08:18.298 } 00:08:18.298 } 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.298 11:24:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.298 11:24:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.299 11:24:17 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.558 request: 00:08:18.558 { 00:08:18.558 "method": "env_dpdk_get_mem_stats", 00:08:18.558 "req_id": 1 00:08:18.558 } 00:08:18.558 Got JSON-RPC error response 00:08:18.558 response: 00:08:18.558 { 00:08:18.558 "code": -32601, 00:08:18.558 "message": "Method not found" 00:08:18.558 } 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.558 11:24:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59996 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59996 ']' 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59996 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59996 00:08:18.558 killing process with pid 59996 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59996' 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@971 -- # kill 59996 00:08:18.558 11:24:17 app_cmdline -- common/autotest_common.sh@976 -- # wait 59996 00:08:21.096 00:08:21.096 real 0m4.384s 00:08:21.096 user 0m4.685s 00:08:21.096 sys 0m0.623s 00:08:21.096 11:24:20 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.096 ************************************ 00:08:21.096 END TEST app_cmdline 00:08:21.096 ************************************ 00:08:21.096 11:24:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.096 11:24:20 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.096 11:24:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.096 11:24:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.096 11:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.096 ************************************ 00:08:21.096 START TEST version 00:08:21.096 ************************************ 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.096 * Looking for test storage... 00:08:21.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.096 11:24:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.096 11:24:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.096 11:24:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.096 11:24:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.096 11:24:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.096 11:24:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.096 11:24:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.096 11:24:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.096 11:24:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.096 11:24:20 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:21.096 11:24:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.096 11:24:20 version -- scripts/common.sh@344 -- # case "$op" in 00:08:21.096 11:24:20 version -- scripts/common.sh@345 -- # : 1 00:08:21.096 11:24:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.096 11:24:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.096 11:24:20 version -- scripts/common.sh@365 -- # decimal 1 00:08:21.096 11:24:20 version -- scripts/common.sh@353 -- # local d=1 00:08:21.096 11:24:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.096 11:24:20 version -- scripts/common.sh@355 -- # echo 1 00:08:21.096 11:24:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.096 11:24:20 version -- scripts/common.sh@366 -- # decimal 2 00:08:21.096 11:24:20 version -- scripts/common.sh@353 -- # local d=2 00:08:21.096 11:24:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.096 11:24:20 version -- scripts/common.sh@355 -- # echo 2 00:08:21.096 11:24:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.096 11:24:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.096 11:24:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.096 11:24:20 version -- scripts/common.sh@368 -- # return 0 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.096 --rc genhtml_branch_coverage=1 00:08:21.096 --rc genhtml_function_coverage=1 00:08:21.096 --rc genhtml_legend=1 00:08:21.096 --rc geninfo_all_blocks=1 00:08:21.096 --rc geninfo_unexecuted_blocks=1 00:08:21.096 00:08:21.096 ' 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:08:21.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.096 --rc genhtml_branch_coverage=1 00:08:21.096 --rc genhtml_function_coverage=1 00:08:21.096 --rc genhtml_legend=1 00:08:21.096 --rc geninfo_all_blocks=1 00:08:21.096 --rc geninfo_unexecuted_blocks=1 00:08:21.096 00:08:21.096 ' 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.096 --rc genhtml_branch_coverage=1 00:08:21.096 --rc genhtml_function_coverage=1 00:08:21.096 --rc genhtml_legend=1 00:08:21.096 --rc geninfo_all_blocks=1 00:08:21.096 --rc geninfo_unexecuted_blocks=1 00:08:21.096 00:08:21.096 ' 00:08:21.096 11:24:20 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.096 --rc genhtml_branch_coverage=1 00:08:21.096 --rc genhtml_function_coverage=1 00:08:21.096 --rc genhtml_legend=1 00:08:21.096 --rc geninfo_all_blocks=1 00:08:21.096 --rc geninfo_unexecuted_blocks=1 00:08:21.096 00:08:21.096 ' 00:08:21.096 11:24:20 version -- app/version.sh@17 -- # get_header_version major 00:08:21.356 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # cut -f2 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.356 11:24:20 version -- app/version.sh@17 -- # major=25 00:08:21.356 11:24:20 version -- app/version.sh@18 -- # get_header_version minor 00:08:21.356 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # cut -f2 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.356 11:24:20 version -- app/version.sh@18 -- # minor=1 00:08:21.356 11:24:20 
version -- app/version.sh@19 -- # get_header_version patch 00:08:21.356 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # cut -f2 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.356 11:24:20 version -- app/version.sh@19 -- # patch=0 00:08:21.356 11:24:20 version -- app/version.sh@20 -- # get_header_version suffix 00:08:21.356 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # cut -f2 00:08:21.356 11:24:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.356 11:24:20 version -- app/version.sh@20 -- # suffix=-pre 00:08:21.356 11:24:20 version -- app/version.sh@22 -- # version=25.1 00:08:21.356 11:24:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:21.356 11:24:20 version -- app/version.sh@28 -- # version=25.1rc0 00:08:21.356 11:24:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:21.356 11:24:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.356 11:24:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:21.356 11:24:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:21.356 00:08:21.356 real 0m0.309s 00:08:21.356 user 0m0.183s 00:08:21.356 sys 0m0.179s 00:08:21.356 11:24:20 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.356 11:24:20 version -- common/autotest_common.sh@10 -- # set +x 00:08:21.356 ************************************ 00:08:21.356 END TEST version 00:08:21.356 ************************************ 00:08:21.356 
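The get_header_version traces above show how version.sh extracts each component: grep the matching `#define SPDK_VERSION_*` line out of include/spdk/version.h, cut the value field, strip quotes. A self-contained re-creation against a stand-in header (the stand-in file content and the space delimiter are assumptions; the real version.h is tab-separated, which is why the trace uses plain `cut -f2`):

```shell
# Stand-in for include/spdk/version.h (space-separated here, hence cut -f3).
hdr='#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"'

# Mirror of version.sh: grep the define, take the value field, drop quotes.
get_header_version() {
    printf '%s\n' "$hdr" \
        | grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        | cut -d' ' -f3 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
suffix=$(get_header_version SUFFIX)   # -pre
echo "${major}.${minor}${suffix}"     # 25.1-pre
```

This reproduces the `version=25.1` / `suffix=-pre` assignments seen in the trace, which version.sh then compares against `python3 -c 'import spdk; print(spdk.__version__)'`.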
11:24:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:21.356 11:24:20 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:21.356 11:24:20 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:21.356 11:24:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.356 11:24:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.356 11:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:21.356 ************************************ 00:08:21.356 START TEST bdev_raid 00:08:21.356 ************************************ 00:08:21.356 11:24:20 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:21.356 * Looking for test storage... 00:08:21.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.616 11:24:20 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.616 --rc genhtml_branch_coverage=1 00:08:21.616 --rc genhtml_function_coverage=1 00:08:21.616 --rc genhtml_legend=1 00:08:21.616 --rc geninfo_all_blocks=1 00:08:21.616 --rc geninfo_unexecuted_blocks=1 00:08:21.616 00:08:21.616 ' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.616 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:21.616 --rc genhtml_branch_coverage=1 00:08:21.616 --rc genhtml_function_coverage=1 00:08:21.616 --rc genhtml_legend=1 00:08:21.616 --rc geninfo_all_blocks=1 00:08:21.616 --rc geninfo_unexecuted_blocks=1 00:08:21.616 00:08:21.616 ' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.616 --rc genhtml_branch_coverage=1 00:08:21.616 --rc genhtml_function_coverage=1 00:08:21.616 --rc genhtml_legend=1 00:08:21.616 --rc geninfo_all_blocks=1 00:08:21.616 --rc geninfo_unexecuted_blocks=1 00:08:21.616 00:08:21.616 ' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.616 --rc genhtml_branch_coverage=1 00:08:21.616 --rc genhtml_function_coverage=1 00:08:21.616 --rc genhtml_legend=1 00:08:21.616 --rc geninfo_all_blocks=1 00:08:21.616 --rc geninfo_unexecuted_blocks=1 00:08:21.616 00:08:21.616 ' 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:21.616 11:24:20 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:21.616 11:24:20 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.616 11:24:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.616 ************************************ 
00:08:21.616 START TEST raid1_resize_data_offset_test 00:08:21.616 ************************************ 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60188 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60188' 00:08:21.616 Process raid pid: 60188 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60188 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60188 ']' 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.616 11:24:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.616 [2024-11-05 11:24:20.857635] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:21.617 [2024-11-05 11:24:20.857866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.876 [2024-11-05 11:24:21.032450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.876 [2024-11-05 11:24:21.141686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.135 [2024-11-05 11:24:21.360219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.135 [2024-11-05 11:24:21.360325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 malloc0 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 malloc1 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.704 11:24:21 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 null0 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.704 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 [2024-11-05 11:24:21.889944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:22.704 [2024-11-05 11:24:21.891848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:22.704 [2024-11-05 11:24:21.891974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:22.704 [2024-11-05 11:24:21.892164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:22.704 [2024-11-05 11:24:21.892183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:22.704 [2024-11-05 11:24:21.892497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:22.704 [2024-11-05 11:24:21.892672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:22.704 [2024-11-05 11:24:21.892686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:22.704 [2024-11-05 11:24:21.892860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.705 [2024-11-05 11:24:21.953826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.705 11:24:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.272 malloc2 00:08:23.272 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.272 11:24:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:23.272 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.272 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.272 [2024-11-05 11:24:22.494039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:23.272 [2024-11-05 11:24:22.510748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:23.272 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.273 [2024-11-05 11:24:22.512701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:23.273 11:24:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:23.273 11:24:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.273 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.273 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.273 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60188 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60188 ']' 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60188 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60188 00:08:23.532 killing process with pid 60188 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60188' 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60188 00:08:23.532 [2024-11-05 11:24:22.604040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.532 11:24:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60188 00:08:23.532 [2024-11-05 11:24:22.604780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:23.532 [2024-11-05 11:24:22.604841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.532 [2024-11-05 11:24:22.604860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:23.532 [2024-11-05 11:24:22.643757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.532 [2024-11-05 11:24:22.644213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.532 [2024-11-05 11:24:22.644238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:25.440 [2024-11-05 11:24:24.509247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.378 11:24:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:26.378 00:08:26.378 real 0m4.861s 00:08:26.378 user 0m4.794s 00:08:26.378 sys 0m0.522s 00:08:26.378 11:24:25 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.378 11:24:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.378 ************************************ 00:08:26.378 END TEST raid1_resize_data_offset_test 00:08:26.378 ************************************ 00:08:26.637 11:24:25 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:26.637 11:24:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:26.637 11:24:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.637 11:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.637 ************************************ 00:08:26.637 START TEST raid0_resize_superblock_test 00:08:26.637 ************************************ 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60273 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.637 Process raid pid: 60273 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60273' 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60273 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60273 ']' 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.637 11:24:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.637 [2024-11-05 11:24:25.780165] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:26.637 [2024-11-05 11:24:25.780276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.899 [2024-11-05 11:24:25.951352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.899 [2024-11-05 11:24:26.063054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.158 [2024-11-05 11:24:26.267985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.158 [2024-11-05 11:24:26.268116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.417 11:24:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.417 11:24:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:27.417 11:24:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:27.417 11:24:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.417 11:24:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:27.985 malloc0 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.985 [2024-11-05 11:24:27.142537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:27.985 [2024-11-05 11:24:27.142605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.985 [2024-11-05 11:24:27.142631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:27.985 [2024-11-05 11:24:27.142643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.985 [2024-11-05 11:24:27.144941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.985 [2024-11-05 11:24:27.144984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:27.985 pt0 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.985 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.985 36f291b9-f71f-4070-ae9b-13b8d0c678ac 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.986 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 1323d53e-d694-4913-8d2d-f99e8e888e76 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 [2024-11-05 11:24:27.277316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff is claimed 00:08:28.245 [2024-11-05 11:24:27.277403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1323d53e-d694-4913-8d2d-f99e8e888e76 is claimed 00:08:28.245 [2024-11-05 11:24:27.277530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:28.245 [2024-11-05 11:24:27.277545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:28.245 [2024-11-05 11:24:27.277785] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:28.245 [2024-11-05 11:24:27.277980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:28.245 [2024-11-05 11:24:27.277990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:28.245 [2024-11-05 11:24:27.278168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:28.245 11:24:27 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:28.245 [2024-11-05 11:24:27.381372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 [2024-11-05 11:24:27.405317] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:28.245 [2024-11-05 11:24:27.405385] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff' was resized: old size 131072, new size 204800 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.245 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.245 [2024-11-05 11:24:27.417200] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:28.245 [2024-11-05 11:24:27.417263] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1323d53e-d694-4913-8d2d-f99e8e888e76' was resized: old size 131072, new size 204800 00:08:28.246 [2024-11-05 11:24:27.417318] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.246 11:24:27 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.246 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.505 [2024-11-05 11:24:27.525104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.505 [2024-11-05 11:24:27.572803] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:28.505 [2024-11-05 11:24:27.572871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:28.505 [2024-11-05 11:24:27.572882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.505 [2024-11-05 11:24:27.572899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:28.505 [2024-11-05 11:24:27.573012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.505 [2024-11-05 11:24:27.573044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.505 [2024-11-05 11:24:27.573054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.505 [2024-11-05 11:24:27.584724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:28.505 [2024-11-05 11:24:27.584780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.505 [2024-11-05 11:24:27.584801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:28.505 [2024-11-05 11:24:27.584811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.505 [2024-11-05 11:24:27.586743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.505 [2024-11-05 11:24:27.586780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:28.505 [2024-11-05 11:24:27.588456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff 00:08:28.505 [2024-11-05 11:24:27.588522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff is claimed 00:08:28.505 [2024-11-05 11:24:27.588627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1323d53e-d694-4913-8d2d-f99e8e888e76 00:08:28.505 [2024-11-05 11:24:27.588647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1323d53e-d694-4913-8d2d-f99e8e888e76 is claimed 00:08:28.505 [2024-11-05 11:24:27.588761] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1323d53e-d694-4913-8d2d-f99e8e888e76 (2) smaller than existing raid bdev Raid (3) 00:08:28.505 [2024-11-05 11:24:27.588782] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fbc42d2b-b40e-4b4c-a98f-ee2d72d020ff: File exists 00:08:28.505 [2024-11-05 11:24:27.588820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:28.505 [2024-11-05 11:24:27.588831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:28.505 [2024-11-05 11:24:27.589057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:28.505 [2024-11-05 11:24:27.589217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:28.505 [2024-11-05 11:24:27.589232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:28.505 [2024-11-05 11:24:27.589409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.505 pt0 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:28.505 [2024-11-05 11:24:27.609010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60273 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60273 ']' 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60273 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60273 00:08:28.505 killing process with pid 60273 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60273' 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60273 00:08:28.505 [2024-11-05 11:24:27.675439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.505 [2024-11-05 11:24:27.675498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.505 [2024-11-05 11:24:27.675534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.505 [2024-11-05 11:24:27.675542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:28.505 11:24:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60273 00:08:29.883 [2024-11-05 11:24:29.063454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.304 11:24:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:31.304 00:08:31.304 real 0m4.466s 00:08:31.304 user 0m4.627s 00:08:31.304 sys 0m0.547s 00:08:31.304 ************************************ 00:08:31.304 END TEST raid0_resize_superblock_test 00:08:31.304 ************************************ 00:08:31.304 11:24:30 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.304 11:24:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.304 11:24:30 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:31.304 11:24:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:31.304 11:24:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.304 11:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.304 ************************************ 00:08:31.304 START TEST raid1_resize_superblock_test 00:08:31.304 ************************************ 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60367 00:08:31.304 Process raid pid: 60367 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60367' 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60367 00:08:31.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60367 ']' 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.304 11:24:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.304 [2024-11-05 11:24:30.315785] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:31.304 [2024-11-05 11:24:30.315984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.304 [2024-11-05 11:24:30.492356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.564 [2024-11-05 11:24:30.604665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.564 [2024-11-05 11:24:30.801355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.564 [2024-11-05 11:24:30.801397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.133 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.133 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:32.133 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:32.133 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.133 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 malloc0 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 [2024-11-05 11:24:31.674966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:32.702 [2024-11-05 11:24:31.675031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.702 [2024-11-05 11:24:31.675057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.702 [2024-11-05 11:24:31.675068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.702 [2024-11-05 11:24:31.677258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.702 [2024-11-05 11:24:31.677333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:32.702 pt0 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 eeb8cd50-ace1-49ae-8012-8fa6167befa7 00:08:32.702 11:24:31 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 698ff402-4644-472f-b589-a51195ab3482 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 977a9d8c-b378-4014-ab10-c384be4fb986 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 [2024-11-05 11:24:31.796353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 698ff402-4644-472f-b589-a51195ab3482 is claimed 00:08:32.702 [2024-11-05 11:24:31.796498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 977a9d8c-b378-4014-ab10-c384be4fb986 is claimed 00:08:32.702 [2024-11-05 11:24:31.796696] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.702 [2024-11-05 11:24:31.796752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:32.702 [2024-11-05 11:24:31.797058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:32.702 [2024-11-05 11:24:31.797306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.702 [2024-11-05 11:24:31.797323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:32.702 [2024-11-05 11:24:31.797482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.702 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.703 11:24:31 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.703 [2024-11-05 11:24:31.908547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.703 [2024-11-05 11:24:31.944319] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:32.703 [2024-11-05 11:24:31.944400] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '698ff402-4644-472f-b589-a51195ab3482' was resized: old size 131072, new size 204800 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.703 [2024-11-05 11:24:31.952194] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:32.703 [2024-11-05 11:24:31.952217] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '977a9d8c-b378-4014-ab10-c384be4fb986' was resized: old size 131072, new size 204800 00:08:32.703 [2024-11-05 11:24:31.952244] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:32.703 11:24:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:32.963 11:24:32 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:32.963 [2024-11-05 11:24:32.040170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 [2024-11-05 11:24:32.087857] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:32.963 [2024-11-05 11:24:32.087977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:32.963 [2024-11-05 11:24:32.088009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:32.963 [2024-11-05 11:24:32.088200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.963 [2024-11-05 11:24:32.088417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.963 [2024-11-05 11:24:32.088480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.963 [2024-11-05 11:24:32.088493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 [2024-11-05 11:24:32.095754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:32.963 [2024-11-05 11:24:32.095814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.963 [2024-11-05 11:24:32.095837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:32.963 [2024-11-05 11:24:32.095848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.963 
[2024-11-05 11:24:32.097997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.963 [2024-11-05 11:24:32.098037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:32.963 [2024-11-05 11:24:32.099724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 698ff402-4644-472f-b589-a51195ab3482 00:08:32.963 [2024-11-05 11:24:32.099794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 698ff402-4644-472f-b589-a51195ab3482 is claimed 00:08:32.963 [2024-11-05 11:24:32.099910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 977a9d8c-b378-4014-ab10-c384be4fb986 00:08:32.963 [2024-11-05 11:24:32.099931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 977a9d8c-b378-4014-ab10-c384be4fb986 is claimed 00:08:32.963 [2024-11-05 11:24:32.100096] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 977a9d8c-b378-4014-ab10-c384be4fb986 (2) smaller than existing raid bdev Raid (3) 00:08:32.963 [2024-11-05 11:24:32.100117] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 698ff402-4644-472f-b589-a51195ab3482: File exists 00:08:32.963 [2024-11-05 11:24:32.100165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:32.963 [2024-11-05 11:24:32.100178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:32.963 pt0 00:08:32.963 [2024-11-05 11:24:32.100450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 [2024-11-05 11:24:32.100620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:32.963 [2024-11-05 11:24:32.100631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 
0x617000007b00 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:32.963 [2024-11-05 11:24:32.100802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.963 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.963 [2024-11-05 11:24:32.115994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60367 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@952 -- # '[' -z 60367 ']' 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60367 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60367 00:08:32.964 killing process with pid 60367 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60367' 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60367 00:08:32.964 11:24:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60367 00:08:32.964 [2024-11-05 11:24:32.170724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.964 [2024-11-05 11:24:32.170811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.964 [2024-11-05 11:24:32.170877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.964 [2024-11-05 11:24:32.170891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:34.342 [2024-11-05 11:24:33.593005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.733 11:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:35.733 00:08:35.733 real 0m4.459s 00:08:35.733 user 0m4.615s 00:08:35.733 sys 0m0.526s 00:08:35.733 
11:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.733 11:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.733 ************************************ 00:08:35.733 END TEST raid1_resize_superblock_test 00:08:35.733 ************************************ 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:35.733 11:24:34 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:35.733 11:24:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.733 11:24:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.733 11:24:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.733 ************************************ 00:08:35.733 START TEST raid_function_test_raid0 00:08:35.733 ************************************ 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60475 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.733 11:24:34 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60475' 00:08:35.733 Process raid pid: 60475 00:08:35.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60475 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60475 ']' 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.733 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.734 11:24:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:35.734 [2024-11-05 11:24:34.865671] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:35.734 [2024-11-05 11:24:34.865791] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.993 [2024-11-05 11:24:35.043321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.993 [2024-11-05 11:24:35.159499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.253 [2024-11-05 11:24:35.356112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.253 [2024-11-05 11:24:35.356272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:36.513 Base_1 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.513 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 Base_2 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 [2024-11-05 11:24:35.804736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:36.773 [2024-11-05 11:24:35.806545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:36.773 [2024-11-05 11:24:35.806657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:36.773 [2024-11-05 11:24:35.806696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:36.773 [2024-11-05 11:24:35.806972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:36.773 [2024-11-05 11:24:35.807191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:36.773 [2024-11-05 11:24:35.807238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:36.773 [2024-11-05 11:24:35.807443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:36.773 11:24:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:37.033 [2024-11-05 11:24:36.056395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:37.033 /dev/nbd0 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:37.033 
11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.033 1+0 records in 00:08:37.033 1+0 records out 00:08:37.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347188 s, 11.8 MB/s 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:37.033 11:24:36 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:37.292 { 00:08:37.292 "nbd_device": "/dev/nbd0", 00:08:37.292 "bdev_name": "raid" 00:08:37.292 } 00:08:37.292 ]' 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:37.292 { 00:08:37.292 "nbd_device": "/dev/nbd0", 00:08:37.292 "bdev_name": "raid" 00:08:37.292 } 00:08:37.292 ]' 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:37.292 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:37.293 4096+0 records in 00:08:37.293 4096+0 records out 00:08:37.293 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0350725 s, 59.8 MB/s 00:08:37.293 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:37.553 4096+0 records in 00:08:37.553 4096+0 records out 00:08:37.553 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.211432 s, 9.9 MB/s 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:37.553 128+0 records in 00:08:37.553 128+0 records out 00:08:37.553 65536 bytes (66 kB, 64 KiB) copied, 0.000466455 s, 140 MB/s 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:37.553 2035+0 records in 00:08:37.553 2035+0 records out 00:08:37.553 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00871899 s, 120 MB/s 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:37.553 11:24:36 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:37.553 456+0 records in 00:08:37.553 456+0 records out 00:08:37.553 233472 bytes (233 kB, 228 KiB) copied, 0.00184727 s, 126 MB/s 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:37.553 11:24:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.553 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:37.814 [2024-11-05 11:24:36.984178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:37.814 11:24:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60475 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60475 ']' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60475 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60475 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.074 killing process with pid 60475 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60475' 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 
60475 00:08:38.074 [2024-11-05 11:24:37.291926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.074 [2024-11-05 11:24:37.292035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.074 [2024-11-05 11:24:37.292085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.074 11:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60475 00:08:38.074 [2024-11-05 11:24:37.292100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:38.333 [2024-11-05 11:24:37.500749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.715 ************************************ 00:08:39.715 END TEST raid_function_test_raid0 00:08:39.715 ************************************ 00:08:39.715 11:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:39.715 00:08:39.715 real 0m3.829s 00:08:39.715 user 0m4.437s 00:08:39.715 sys 0m0.940s 00:08:39.715 11:24:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.715 11:24:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:39.715 11:24:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:39.715 11:24:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:39.715 11:24:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.715 11:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.715 ************************************ 00:08:39.715 START TEST raid_function_test_concat 00:08:39.715 ************************************ 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60593 00:08:39.715 Process raid pid: 60593 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60593' 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60593 00:08:39.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60593 ']' 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.715 11:24:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:39.715 [2024-11-05 11:24:38.763405] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:39.715 [2024-11-05 11:24:38.763538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.715 [2024-11-05 11:24:38.922669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.975 [2024-11-05 11:24:39.038684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.975 [2024-11-05 11:24:39.238651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.975 [2024-11-05 11:24:39.238697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:40.552 Base_1 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:40.552 Base_2 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:40.552 [2024-11-05 11:24:39.693979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:40.552 [2024-11-05 11:24:39.695961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:40.552 [2024-11-05 11:24:39.696056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:40.552 [2024-11-05 11:24:39.696073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:40.552 [2024-11-05 11:24:39.696347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:40.552 [2024-11-05 11:24:39.696521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:40.552 [2024-11-05 11:24:39.696538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:40.552 [2024-11-05 11:24:39.696711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.552 11:24:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:40.552 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:40.815 [2024-11-05 11:24:39.929626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:40.815 /dev/nbd0 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.815 1+0 records in 00:08:40.815 1+0 records out 00:08:40.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371974 s, 11.0 MB/s 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:08:40.815 11:24:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:41.073 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.073 { 00:08:41.074 "nbd_device": "/dev/nbd0", 00:08:41.074 "bdev_name": "raid" 00:08:41.074 } 00:08:41.074 ]' 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.074 { 00:08:41.074 "nbd_device": "/dev/nbd0", 00:08:41.074 "bdev_name": "raid" 00:08:41.074 } 00:08:41.074 ]' 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:41.074 4096+0 records in 00:08:41.074 4096+0 records out 00:08:41.074 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341042 s, 61.5 MB/s 00:08:41.074 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:41.333 4096+0 records in 00:08:41.333 4096+0 records out 00:08:41.333 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195663 s, 10.7 MB/s 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:41.333 128+0 records in 00:08:41.333 128+0 records out 00:08:41.333 65536 bytes (66 kB, 64 KiB) copied, 0.000991816 s, 66.1 MB/s 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:41.333 2035+0 records in 00:08:41.333 2035+0 records out 00:08:41.333 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0134432 s, 77.5 MB/s 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:41.333 11:24:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:41.333 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:41.333 456+0 records in 00:08:41.333 456+0 records out 00:08:41.593 233472 bytes (233 kB, 228 KiB) copied, 0.00390804 s, 59.7 MB/s 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:41.593 
11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:41.593 [2024-11-05 11:24:40.839573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:41.593 11:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:41.852 11:24:41 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60593 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60593 ']' 00:08:41.852 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60593 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60593 00:08:42.112 killing process with pid 60593 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60593' 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60593 00:08:42.112 11:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60593 00:08:42.112 [2024-11-05 11:24:41.167598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.112 [2024-11-05 11:24:41.167722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.112 [2024-11-05 11:24:41.167784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.112 [2024-11-05 11:24:41.167798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:42.112 [2024-11-05 11:24:41.372649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.493 ************************************ 00:08:43.493 END TEST raid_function_test_concat 00:08:43.493 ************************************ 00:08:43.493 11:24:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:43.493 00:08:43.493 real 0m3.789s 00:08:43.493 user 0m4.429s 00:08:43.493 sys 0m0.926s 00:08:43.493 11:24:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.493 11:24:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:43.493 11:24:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:43.493 11:24:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:43.493 11:24:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.493 11:24:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.493 ************************************ 00:08:43.493 START TEST raid0_resize_test 00:08:43.493 ************************************ 00:08:43.493 11:24:42 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60721 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60721' 00:08:43.493 Process raid pid: 60721 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60721 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60721 ']' 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:43.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:43.493 11:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.493 [2024-11-05 11:24:42.611060] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:43.493 [2024-11-05 11:24:42.611315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.753 [2024-11-05 11:24:42.786184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.753 [2024-11-05 11:24:42.900476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.012 [2024-11-05 11:24:43.108832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.012 [2024-11-05 11:24:43.108962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.272 Base_1 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.272 Base_2 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.272 [2024-11-05 11:24:43.507268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:44.272 [2024-11-05 11:24:43.509062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:44.272 [2024-11-05 11:24:43.509120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:44.272 [2024-11-05 11:24:43.509143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:44.272 [2024-11-05 11:24:43.509386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:44.272 [2024-11-05 11:24:43.509527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:44.272 [2024-11-05 11:24:43.509537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:44.272 [2024-11-05 11:24:43.509690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.272 [2024-11-05 11:24:43.519244] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:44.272 [2024-11-05 11:24:43.519309] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:44.272 true 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:44.272 [2024-11-05 11:24:43.531425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.272 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.532 [2024-11-05 11:24:43.579184] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:44.532 [2024-11-05 11:24:43.579208] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:44.532 [2024-11-05 11:24:43.579239] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:44.532 true 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.532 [2024-11-05 11:24:43.595312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60721 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60721 ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60721 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@957 -- # uname 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60721 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60721' 00:08:44.532 killing process with pid 60721 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60721 00:08:44.532 [2024-11-05 11:24:43.677792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.532 [2024-11-05 11:24:43.677939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.532 11:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60721 00:08:44.532 [2024-11-05 11:24:43.678017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.532 [2024-11-05 11:24:43.678028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:44.532 [2024-11-05 11:24:43.695068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.912 11:24:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:45.912 00:08:45.912 real 0m2.268s 00:08:45.912 user 0m2.429s 00:08:45.912 sys 0m0.346s 00:08:45.912 11:24:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.912 11:24:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.912 ************************************ 00:08:45.912 END TEST raid0_resize_test 00:08:45.912 
************************************ 00:08:45.912 11:24:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:45.912 11:24:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:45.912 11:24:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.912 11:24:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.912 ************************************ 00:08:45.912 START TEST raid1_resize_test 00:08:45.912 ************************************ 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60777 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60777' 00:08:45.912 Process raid pid: 60777 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60777 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@833 -- # '[' -z 60777 ']' 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.912 11:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.913 [2024-11-05 11:24:44.950871] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:45.913 [2024-11-05 11:24:44.950981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.913 [2024-11-05 11:24:45.127029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.172 [2024-11-05 11:24:45.245344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.432 [2024-11-05 11:24:45.456225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.432 [2024-11-05 11:24:45.456265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 Base_1 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 Base_2 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 [2024-11-05 11:24:45.804998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:46.692 [2024-11-05 11:24:45.806791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:46.692 [2024-11-05 11:24:45.806901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:46.692 [2024-11-05 11:24:45.806918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:46.692 [2024-11-05 11:24:45.807198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:46.692 [2024-11-05 11:24:45.807346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:46.692 [2024-11-05 11:24:45.807356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:08:46.692 [2024-11-05 11:24:45.807514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 [2024-11-05 11:24:45.816962] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:46.692 [2024-11-05 11:24:45.816991] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:46.692 true 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 [2024-11-05 11:24:45.833089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.692 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.692 [2024-11-05 11:24:45.880835] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:46.692 [2024-11-05 11:24:45.880897] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:46.693 [2024-11-05 11:24:45.880972] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:46.693 true 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:46.693 [2024-11-05 11:24:45.892996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:46.693 
11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60777 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60777 ']' 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60777 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.693 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60777 00:08:46.953 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.953 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.953 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60777' 00:08:46.953 killing process with pid 60777 00:08:46.953 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60777 00:08:46.953 [2024-11-05 11:24:45.979849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.953 [2024-11-05 11:24:45.980001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.953 11:24:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60777 00:08:46.953 [2024-11-05 11:24:45.980592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.953 [2024-11-05 11:24:45.980671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:46.953 [2024-11-05 11:24:45.999053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.918 11:24:47 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:08:47.918 00:08:47.918 real 0m2.230s 00:08:47.918 user 0m2.365s 00:08:47.918 sys 0m0.330s 00:08:47.918 11:24:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.918 11:24:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.918 ************************************ 00:08:47.918 END TEST raid1_resize_test 00:08:47.918 ************************************ 00:08:47.918 11:24:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:47.918 11:24:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:47.918 11:24:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:47.918 11:24:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:47.918 11:24:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.918 11:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.918 ************************************ 00:08:47.918 START TEST raid_state_function_test 00:08:47.918 ************************************ 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.918 11:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:47.919 Process raid pid: 60840 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60840 00:08:47.919 11:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60840' 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60840 00:08:47.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60840 ']' 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.919 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.178 [2024-11-05 11:24:47.254407] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:08:48.178 [2024-11-05 11:24:47.254609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.178 [2024-11-05 11:24:47.430467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.437 [2024-11-05 11:24:47.547330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.697 [2024-11-05 11:24:47.747444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.697 [2024-11-05 11:24:47.747572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.957 [2024-11-05 11:24:48.117053] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.957 [2024-11-05 11:24:48.117173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.957 [2024-11-05 11:24:48.117222] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.957 [2024-11-05 11:24:48.117247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.957 11:24:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.957 "name": "Existed_Raid", 00:08:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.957 "strip_size_kb": 64, 00:08:48.957 "state": "configuring", 00:08:48.957 
"raid_level": "raid0", 00:08:48.957 "superblock": false, 00:08:48.957 "num_base_bdevs": 2, 00:08:48.957 "num_base_bdevs_discovered": 0, 00:08:48.957 "num_base_bdevs_operational": 2, 00:08:48.957 "base_bdevs_list": [ 00:08:48.957 { 00:08:48.957 "name": "BaseBdev1", 00:08:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.957 "is_configured": false, 00:08:48.957 "data_offset": 0, 00:08:48.957 "data_size": 0 00:08:48.957 }, 00:08:48.957 { 00:08:48.957 "name": "BaseBdev2", 00:08:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.957 "is_configured": false, 00:08:48.957 "data_offset": 0, 00:08:48.957 "data_size": 0 00:08:48.957 } 00:08:48.957 ] 00:08:48.957 }' 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.957 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 [2024-11-05 11:24:48.556282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.528 [2024-11-05 11:24:48.556357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.528 [2024-11-05 11:24:48.564273] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.528 [2024-11-05 11:24:48.564351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.528 [2024-11-05 11:24:48.564380] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.528 [2024-11-05 11:24:48.564405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 [2024-11-05 11:24:48.606481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.528 BaseBdev1 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.528 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.528 [ 00:08:49.528 { 00:08:49.528 "name": "BaseBdev1", 00:08:49.528 "aliases": [ 00:08:49.528 "baacee5f-89bc-4cb1-a02c-102fb6ba8caf" 00:08:49.528 ], 00:08:49.528 "product_name": "Malloc disk", 00:08:49.528 "block_size": 512, 00:08:49.528 "num_blocks": 65536, 00:08:49.528 "uuid": "baacee5f-89bc-4cb1-a02c-102fb6ba8caf", 00:08:49.528 "assigned_rate_limits": { 00:08:49.528 "rw_ios_per_sec": 0, 00:08:49.528 "rw_mbytes_per_sec": 0, 00:08:49.528 "r_mbytes_per_sec": 0, 00:08:49.528 "w_mbytes_per_sec": 0 00:08:49.528 }, 00:08:49.528 "claimed": true, 00:08:49.528 "claim_type": "exclusive_write", 00:08:49.528 "zoned": false, 00:08:49.528 "supported_io_types": { 00:08:49.528 "read": true, 00:08:49.528 "write": true, 00:08:49.528 "unmap": true, 00:08:49.528 "flush": true, 00:08:49.528 "reset": true, 00:08:49.528 "nvme_admin": false, 00:08:49.528 "nvme_io": false, 00:08:49.528 "nvme_io_md": false, 00:08:49.528 "write_zeroes": true, 00:08:49.528 "zcopy": true, 00:08:49.528 "get_zone_info": false, 00:08:49.528 "zone_management": false, 00:08:49.528 "zone_append": false, 00:08:49.528 "compare": false, 00:08:49.528 "compare_and_write": false, 00:08:49.528 "abort": true, 00:08:49.528 "seek_hole": false, 00:08:49.528 "seek_data": false, 00:08:49.528 "copy": true, 00:08:49.528 "nvme_iov_md": 
false 00:08:49.528 }, 00:08:49.528 "memory_domains": [ 00:08:49.528 { 00:08:49.528 "dma_device_id": "system", 00:08:49.528 "dma_device_type": 1 00:08:49.528 }, 00:08:49.528 { 00:08:49.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.528 "dma_device_type": 2 00:08:49.528 } 00:08:49.528 ], 00:08:49.529 "driver_specific": {} 00:08:49.529 } 00:08:49.529 ] 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.529 
11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.529 "name": "Existed_Raid", 00:08:49.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.529 "strip_size_kb": 64, 00:08:49.529 "state": "configuring", 00:08:49.529 "raid_level": "raid0", 00:08:49.529 "superblock": false, 00:08:49.529 "num_base_bdevs": 2, 00:08:49.529 "num_base_bdevs_discovered": 1, 00:08:49.529 "num_base_bdevs_operational": 2, 00:08:49.529 "base_bdevs_list": [ 00:08:49.529 { 00:08:49.529 "name": "BaseBdev1", 00:08:49.529 "uuid": "baacee5f-89bc-4cb1-a02c-102fb6ba8caf", 00:08:49.529 "is_configured": true, 00:08:49.529 "data_offset": 0, 00:08:49.529 "data_size": 65536 00:08:49.529 }, 00:08:49.529 { 00:08:49.529 "name": "BaseBdev2", 00:08:49.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.529 "is_configured": false, 00:08:49.529 "data_offset": 0, 00:08:49.529 "data_size": 0 00:08:49.529 } 00:08:49.529 ] 00:08:49.529 }' 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.529 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.099 [2024-11-05 11:24:49.081731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.099 [2024-11-05 11:24:49.081839] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.099 [2024-11-05 11:24:49.089729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.099 [2024-11-05 11:24:49.091652] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.099 [2024-11-05 11:24:49.091734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.099 "name": "Existed_Raid", 00:08:50.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.099 "strip_size_kb": 64, 00:08:50.099 "state": "configuring", 00:08:50.099 "raid_level": "raid0", 00:08:50.099 "superblock": false, 00:08:50.099 "num_base_bdevs": 2, 00:08:50.099 "num_base_bdevs_discovered": 1, 00:08:50.099 "num_base_bdevs_operational": 2, 00:08:50.099 "base_bdevs_list": [ 00:08:50.099 { 00:08:50.099 "name": "BaseBdev1", 00:08:50.099 "uuid": "baacee5f-89bc-4cb1-a02c-102fb6ba8caf", 00:08:50.099 "is_configured": true, 00:08:50.099 "data_offset": 0, 00:08:50.099 "data_size": 65536 00:08:50.099 }, 00:08:50.099 { 00:08:50.099 "name": "BaseBdev2", 00:08:50.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.099 "is_configured": false, 00:08:50.099 "data_offset": 0, 00:08:50.099 "data_size": 0 00:08:50.099 } 00:08:50.099 
] 00:08:50.099 }' 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.099 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 [2024-11-05 11:24:49.574788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.359 [2024-11-05 11:24:49.574894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.359 [2024-11-05 11:24:49.574910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:50.359 [2024-11-05 11:24:49.575226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.359 [2024-11-05 11:24:49.575404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.359 [2024-11-05 11:24:49.575420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.359 [2024-11-05 11:24:49.575711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.359 BaseBdev2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:50.359 11:24:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.359 [ 00:08:50.359 { 00:08:50.359 "name": "BaseBdev2", 00:08:50.359 "aliases": [ 00:08:50.359 "636d09c0-654e-4baa-bb6c-16b61dc3717a" 00:08:50.359 ], 00:08:50.359 "product_name": "Malloc disk", 00:08:50.359 "block_size": 512, 00:08:50.359 "num_blocks": 65536, 00:08:50.359 "uuid": "636d09c0-654e-4baa-bb6c-16b61dc3717a", 00:08:50.359 "assigned_rate_limits": { 00:08:50.359 "rw_ios_per_sec": 0, 00:08:50.359 "rw_mbytes_per_sec": 0, 00:08:50.359 "r_mbytes_per_sec": 0, 00:08:50.359 "w_mbytes_per_sec": 0 00:08:50.359 }, 00:08:50.359 "claimed": true, 00:08:50.359 "claim_type": "exclusive_write", 00:08:50.359 "zoned": false, 00:08:50.359 "supported_io_types": { 00:08:50.359 "read": true, 00:08:50.359 "write": true, 00:08:50.359 "unmap": true, 00:08:50.359 "flush": true, 00:08:50.359 "reset": true, 00:08:50.359 "nvme_admin": false, 00:08:50.359 "nvme_io": false, 00:08:50.359 "nvme_io_md": 
false, 00:08:50.359 "write_zeroes": true, 00:08:50.359 "zcopy": true, 00:08:50.359 "get_zone_info": false, 00:08:50.359 "zone_management": false, 00:08:50.359 "zone_append": false, 00:08:50.359 "compare": false, 00:08:50.359 "compare_and_write": false, 00:08:50.359 "abort": true, 00:08:50.359 "seek_hole": false, 00:08:50.359 "seek_data": false, 00:08:50.359 "copy": true, 00:08:50.359 "nvme_iov_md": false 00:08:50.359 }, 00:08:50.359 "memory_domains": [ 00:08:50.359 { 00:08:50.359 "dma_device_id": "system", 00:08:50.359 "dma_device_type": 1 00:08:50.359 }, 00:08:50.359 { 00:08:50.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.359 "dma_device_type": 2 00:08:50.359 } 00:08:50.359 ], 00:08:50.359 "driver_specific": {} 00:08:50.359 } 00:08:50.359 ] 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.359 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.617 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.617 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.617 "name": "Existed_Raid", 00:08:50.617 "uuid": "512ccef0-9c8b-4cec-bcad-ce9f0dd47f53", 00:08:50.617 "strip_size_kb": 64, 00:08:50.617 "state": "online", 00:08:50.617 "raid_level": "raid0", 00:08:50.617 "superblock": false, 00:08:50.617 "num_base_bdevs": 2, 00:08:50.617 "num_base_bdevs_discovered": 2, 00:08:50.617 "num_base_bdevs_operational": 2, 00:08:50.617 "base_bdevs_list": [ 00:08:50.617 { 00:08:50.617 "name": "BaseBdev1", 00:08:50.617 "uuid": "baacee5f-89bc-4cb1-a02c-102fb6ba8caf", 00:08:50.617 "is_configured": true, 00:08:50.617 "data_offset": 0, 00:08:50.617 "data_size": 65536 00:08:50.617 }, 00:08:50.617 { 00:08:50.617 "name": "BaseBdev2", 00:08:50.617 "uuid": "636d09c0-654e-4baa-bb6c-16b61dc3717a", 00:08:50.617 "is_configured": true, 00:08:50.617 "data_offset": 0, 00:08:50.617 "data_size": 65536 00:08:50.617 } 00:08:50.617 ] 00:08:50.617 }' 00:08:50.617 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:50.617 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.876 [2024-11-05 11:24:50.014359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.876 "name": "Existed_Raid", 00:08:50.876 "aliases": [ 00:08:50.876 "512ccef0-9c8b-4cec-bcad-ce9f0dd47f53" 00:08:50.876 ], 00:08:50.876 "product_name": "Raid Volume", 00:08:50.876 "block_size": 512, 00:08:50.876 "num_blocks": 131072, 00:08:50.876 "uuid": "512ccef0-9c8b-4cec-bcad-ce9f0dd47f53", 00:08:50.876 "assigned_rate_limits": { 00:08:50.876 "rw_ios_per_sec": 0, 00:08:50.876 "rw_mbytes_per_sec": 0, 00:08:50.876 "r_mbytes_per_sec": 
0, 00:08:50.876 "w_mbytes_per_sec": 0 00:08:50.876 }, 00:08:50.876 "claimed": false, 00:08:50.876 "zoned": false, 00:08:50.876 "supported_io_types": { 00:08:50.876 "read": true, 00:08:50.876 "write": true, 00:08:50.876 "unmap": true, 00:08:50.876 "flush": true, 00:08:50.876 "reset": true, 00:08:50.876 "nvme_admin": false, 00:08:50.876 "nvme_io": false, 00:08:50.876 "nvme_io_md": false, 00:08:50.876 "write_zeroes": true, 00:08:50.876 "zcopy": false, 00:08:50.876 "get_zone_info": false, 00:08:50.876 "zone_management": false, 00:08:50.876 "zone_append": false, 00:08:50.876 "compare": false, 00:08:50.876 "compare_and_write": false, 00:08:50.876 "abort": false, 00:08:50.876 "seek_hole": false, 00:08:50.876 "seek_data": false, 00:08:50.876 "copy": false, 00:08:50.876 "nvme_iov_md": false 00:08:50.876 }, 00:08:50.876 "memory_domains": [ 00:08:50.876 { 00:08:50.876 "dma_device_id": "system", 00:08:50.876 "dma_device_type": 1 00:08:50.876 }, 00:08:50.876 { 00:08:50.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.876 "dma_device_type": 2 00:08:50.876 }, 00:08:50.876 { 00:08:50.876 "dma_device_id": "system", 00:08:50.876 "dma_device_type": 1 00:08:50.876 }, 00:08:50.876 { 00:08:50.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.876 "dma_device_type": 2 00:08:50.876 } 00:08:50.876 ], 00:08:50.876 "driver_specific": { 00:08:50.876 "raid": { 00:08:50.876 "uuid": "512ccef0-9c8b-4cec-bcad-ce9f0dd47f53", 00:08:50.876 "strip_size_kb": 64, 00:08:50.876 "state": "online", 00:08:50.876 "raid_level": "raid0", 00:08:50.876 "superblock": false, 00:08:50.876 "num_base_bdevs": 2, 00:08:50.876 "num_base_bdevs_discovered": 2, 00:08:50.876 "num_base_bdevs_operational": 2, 00:08:50.876 "base_bdevs_list": [ 00:08:50.876 { 00:08:50.876 "name": "BaseBdev1", 00:08:50.876 "uuid": "baacee5f-89bc-4cb1-a02c-102fb6ba8caf", 00:08:50.876 "is_configured": true, 00:08:50.876 "data_offset": 0, 00:08:50.876 "data_size": 65536 00:08:50.876 }, 00:08:50.876 { 00:08:50.876 "name": "BaseBdev2", 
00:08:50.876 "uuid": "636d09c0-654e-4baa-bb6c-16b61dc3717a", 00:08:50.876 "is_configured": true, 00:08:50.876 "data_offset": 0, 00:08:50.876 "data_size": 65536 00:08:50.876 } 00:08:50.876 ] 00:08:50.876 } 00:08:50.876 } 00:08:50.876 }' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.876 BaseBdev2' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.876 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.138 [2024-11-05 11:24:50.217755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.138 [2024-11-05 11:24:50.217786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.138 [2024-11-05 11:24:50.217835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:51.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.139 "name": "Existed_Raid", 00:08:51.139 "uuid": "512ccef0-9c8b-4cec-bcad-ce9f0dd47f53", 00:08:51.139 "strip_size_kb": 64, 00:08:51.139 
"state": "offline", 00:08:51.139 "raid_level": "raid0", 00:08:51.139 "superblock": false, 00:08:51.139 "num_base_bdevs": 2, 00:08:51.139 "num_base_bdevs_discovered": 1, 00:08:51.139 "num_base_bdevs_operational": 1, 00:08:51.139 "base_bdevs_list": [ 00:08:51.139 { 00:08:51.139 "name": null, 00:08:51.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.139 "is_configured": false, 00:08:51.139 "data_offset": 0, 00:08:51.139 "data_size": 65536 00:08:51.139 }, 00:08:51.139 { 00:08:51.139 "name": "BaseBdev2", 00:08:51.139 "uuid": "636d09c0-654e-4baa-bb6c-16b61dc3717a", 00:08:51.139 "is_configured": true, 00:08:51.139 "data_offset": 0, 00:08:51.139 "data_size": 65536 00:08:51.139 } 00:08:51.139 ] 00:08:51.139 }' 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.139 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.709 [2024-11-05 11:24:50.811951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.709 [2024-11-05 11:24:50.812074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60840 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60840 ']' 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60840 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:51.709 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:51.710 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60840 00:08:51.968 killing process with pid 60840 00:08:51.968 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:51.968 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:51.968 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60840' 00:08:51.968 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60840 00:08:51.968 [2024-11-05 11:24:51.004496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.968 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60840 00:08:51.968 [2024-11-05 11:24:51.020944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:52.907 00:08:52.907 real 0m4.945s 00:08:52.907 user 0m7.152s 00:08:52.907 sys 0m0.792s 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.907 ************************************ 00:08:52.907 END TEST raid_state_function_test 00:08:52.907 ************************************ 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.907 11:24:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:52.907 11:24:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:08:52.907 11:24:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.907 11:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.907 ************************************ 00:08:52.907 START TEST raid_state_function_test_sb 00:08:52.907 ************************************ 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:52.907 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61093 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61093' 00:08:53.167 Process raid pid: 61093 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61093 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61093 ']' 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.167 11:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.167 [2024-11-05 11:24:52.286290] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:53.167 [2024-11-05 11:24:52.286485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.426 [2024-11-05 11:24:52.460966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.426 [2024-11-05 11:24:52.573952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.686 [2024-11-05 11:24:52.772939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.686 [2024-11-05 11:24:52.773063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.946 [2024-11-05 11:24:53.135736] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:53.946 [2024-11-05 11:24:53.135863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.946 [2024-11-05 11:24:53.135895] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.946 [2024-11-05 11:24:53.135920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.946 "name": "Existed_Raid", 00:08:53.946 "uuid": "e9621ddd-04bc-4ba6-a10c-9db889dcd0e0", 00:08:53.946 "strip_size_kb": 64, 00:08:53.946 "state": "configuring", 00:08:53.946 "raid_level": "raid0", 00:08:53.946 "superblock": true, 00:08:53.946 "num_base_bdevs": 2, 00:08:53.946 "num_base_bdevs_discovered": 0, 00:08:53.946 "num_base_bdevs_operational": 2, 00:08:53.946 "base_bdevs_list": [ 00:08:53.946 { 00:08:53.946 "name": "BaseBdev1", 00:08:53.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.946 "is_configured": false, 00:08:53.946 "data_offset": 0, 00:08:53.946 "data_size": 0 00:08:53.946 }, 00:08:53.946 { 00:08:53.946 "name": "BaseBdev2", 00:08:53.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.946 "is_configured": false, 00:08:53.946 "data_offset": 0, 00:08:53.946 "data_size": 0 00:08:53.946 } 00:08:53.946 ] 00:08:53.946 }' 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.946 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-11-05 11:24:53.574966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:54.514 [2024-11-05 11:24:53.575004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-11-05 11:24:53.586939] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.514 [2024-11-05 11:24:53.587039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.514 [2024-11-05 11:24:53.587053] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.514 [2024-11-05 11:24:53.587065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-11-05 11:24:53.634628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.514 BaseBdev1 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.514 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 [ 00:08:54.515 { 00:08:54.515 "name": "BaseBdev1", 00:08:54.515 "aliases": [ 00:08:54.515 "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1" 00:08:54.515 ], 00:08:54.515 "product_name": "Malloc disk", 00:08:54.515 "block_size": 512, 00:08:54.515 "num_blocks": 65536, 00:08:54.515 "uuid": "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1", 00:08:54.515 "assigned_rate_limits": { 00:08:54.515 "rw_ios_per_sec": 0, 00:08:54.515 "rw_mbytes_per_sec": 0, 00:08:54.515 "r_mbytes_per_sec": 0, 00:08:54.515 "w_mbytes_per_sec": 0 00:08:54.515 }, 00:08:54.515 "claimed": true, 
00:08:54.515 "claim_type": "exclusive_write", 00:08:54.515 "zoned": false, 00:08:54.515 "supported_io_types": { 00:08:54.515 "read": true, 00:08:54.515 "write": true, 00:08:54.515 "unmap": true, 00:08:54.515 "flush": true, 00:08:54.515 "reset": true, 00:08:54.515 "nvme_admin": false, 00:08:54.515 "nvme_io": false, 00:08:54.515 "nvme_io_md": false, 00:08:54.515 "write_zeroes": true, 00:08:54.515 "zcopy": true, 00:08:54.515 "get_zone_info": false, 00:08:54.515 "zone_management": false, 00:08:54.515 "zone_append": false, 00:08:54.515 "compare": false, 00:08:54.515 "compare_and_write": false, 00:08:54.515 "abort": true, 00:08:54.515 "seek_hole": false, 00:08:54.515 "seek_data": false, 00:08:54.515 "copy": true, 00:08:54.515 "nvme_iov_md": false 00:08:54.515 }, 00:08:54.515 "memory_domains": [ 00:08:54.515 { 00:08:54.515 "dma_device_id": "system", 00:08:54.515 "dma_device_type": 1 00:08:54.515 }, 00:08:54.515 { 00:08:54.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.515 "dma_device_type": 2 00:08:54.515 } 00:08:54.515 ], 00:08:54.515 "driver_specific": {} 00:08:54.515 } 00:08:54.515 ] 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.515 11:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.515 "name": "Existed_Raid", 00:08:54.515 "uuid": "a74371e8-063a-45a4-919e-4559e2e1c31a", 00:08:54.515 "strip_size_kb": 64, 00:08:54.515 "state": "configuring", 00:08:54.515 "raid_level": "raid0", 00:08:54.515 "superblock": true, 00:08:54.515 "num_base_bdevs": 2, 00:08:54.515 "num_base_bdevs_discovered": 1, 00:08:54.515 "num_base_bdevs_operational": 2, 00:08:54.515 "base_bdevs_list": [ 00:08:54.515 { 00:08:54.515 "name": "BaseBdev1", 00:08:54.515 "uuid": "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1", 00:08:54.515 "is_configured": true, 00:08:54.515 "data_offset": 2048, 00:08:54.515 "data_size": 63488 00:08:54.515 }, 00:08:54.515 { 00:08:54.515 "name": "BaseBdev2", 00:08:54.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.515 
"is_configured": false, 00:08:54.515 "data_offset": 0, 00:08:54.515 "data_size": 0 00:08:54.515 } 00:08:54.515 ] 00:08:54.515 }' 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.515 11:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.082 [2024-11-05 11:24:54.157797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.082 [2024-11-05 11:24:54.157899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.082 [2024-11-05 11:24:54.169825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.082 [2024-11-05 11:24:54.171698] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.082 [2024-11-05 11:24:54.171777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.082 11:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.082 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.083 11:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.083 "name": "Existed_Raid", 00:08:55.083 "uuid": "24ffcb37-ca3f-4162-af75-cecadc227425", 00:08:55.083 "strip_size_kb": 64, 00:08:55.083 "state": "configuring", 00:08:55.083 "raid_level": "raid0", 00:08:55.083 "superblock": true, 00:08:55.083 "num_base_bdevs": 2, 00:08:55.083 "num_base_bdevs_discovered": 1, 00:08:55.083 "num_base_bdevs_operational": 2, 00:08:55.083 "base_bdevs_list": [ 00:08:55.083 { 00:08:55.083 "name": "BaseBdev1", 00:08:55.083 "uuid": "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1", 00:08:55.083 "is_configured": true, 00:08:55.083 "data_offset": 2048, 00:08:55.083 "data_size": 63488 00:08:55.083 }, 00:08:55.083 { 00:08:55.083 "name": "BaseBdev2", 00:08:55.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.083 "is_configured": false, 00:08:55.083 "data_offset": 0, 00:08:55.083 "data_size": 0 00:08:55.083 } 00:08:55.083 ] 00:08:55.083 }' 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.083 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.651 [2024-11-05 11:24:54.673171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.651 [2024-11-05 11:24:54.673435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.651 [2024-11-05 11:24:54.673451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:55.651 [2024-11-05 11:24:54.673738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:55.651 [2024-11-05 11:24:54.673909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.651 [2024-11-05 11:24:54.673923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:55.651 BaseBdev2 00:08:55.651 [2024-11-05 11:24:54.674065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.651 11:24:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.651 [ 00:08:55.651 { 00:08:55.651 "name": "BaseBdev2", 00:08:55.651 "aliases": [ 00:08:55.651 "930650f7-0688-4105-bb27-f23ddadc2e9a" 00:08:55.651 ], 00:08:55.651 "product_name": "Malloc disk", 00:08:55.651 "block_size": 512, 00:08:55.651 "num_blocks": 65536, 00:08:55.651 "uuid": "930650f7-0688-4105-bb27-f23ddadc2e9a", 00:08:55.651 "assigned_rate_limits": { 00:08:55.651 "rw_ios_per_sec": 0, 00:08:55.651 "rw_mbytes_per_sec": 0, 00:08:55.651 "r_mbytes_per_sec": 0, 00:08:55.651 "w_mbytes_per_sec": 0 00:08:55.651 }, 00:08:55.651 "claimed": true, 00:08:55.651 "claim_type": "exclusive_write", 00:08:55.651 "zoned": false, 00:08:55.651 "supported_io_types": { 00:08:55.651 "read": true, 00:08:55.651 "write": true, 00:08:55.651 "unmap": true, 00:08:55.651 "flush": true, 00:08:55.651 "reset": true, 00:08:55.651 "nvme_admin": false, 00:08:55.651 "nvme_io": false, 00:08:55.651 "nvme_io_md": false, 00:08:55.651 "write_zeroes": true, 00:08:55.651 "zcopy": true, 00:08:55.651 "get_zone_info": false, 00:08:55.651 "zone_management": false, 00:08:55.651 "zone_append": false, 00:08:55.651 "compare": false, 00:08:55.651 "compare_and_write": false, 00:08:55.651 "abort": true, 00:08:55.651 "seek_hole": false, 00:08:55.651 "seek_data": false, 00:08:55.651 "copy": true, 00:08:55.651 "nvme_iov_md": false 00:08:55.651 }, 00:08:55.651 "memory_domains": [ 00:08:55.651 { 00:08:55.651 "dma_device_id": "system", 00:08:55.651 "dma_device_type": 1 00:08:55.651 }, 00:08:55.651 { 00:08:55.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.651 "dma_device_type": 2 00:08:55.651 } 00:08:55.651 ], 00:08:55.651 "driver_specific": {} 00:08:55.651 } 00:08:55.651 ] 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:55.651 11:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.651 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.652 11:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.652 "name": "Existed_Raid", 00:08:55.652 "uuid": "24ffcb37-ca3f-4162-af75-cecadc227425", 00:08:55.652 "strip_size_kb": 64, 00:08:55.652 "state": "online", 00:08:55.652 "raid_level": "raid0", 00:08:55.652 "superblock": true, 00:08:55.652 "num_base_bdevs": 2, 00:08:55.652 "num_base_bdevs_discovered": 2, 00:08:55.652 "num_base_bdevs_operational": 2, 00:08:55.652 "base_bdevs_list": [ 00:08:55.652 { 00:08:55.652 "name": "BaseBdev1", 00:08:55.652 "uuid": "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1", 00:08:55.652 "is_configured": true, 00:08:55.652 "data_offset": 2048, 00:08:55.652 "data_size": 63488 00:08:55.652 }, 00:08:55.652 { 00:08:55.652 "name": "BaseBdev2", 00:08:55.652 "uuid": "930650f7-0688-4105-bb27-f23ddadc2e9a", 00:08:55.652 "is_configured": true, 00:08:55.652 "data_offset": 2048, 00:08:55.652 "data_size": 63488 00:08:55.652 } 00:08:55.652 ] 00:08:55.652 }' 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.652 11:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.910 [2024-11-05 11:24:55.164667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.910 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.169 "name": "Existed_Raid", 00:08:56.169 "aliases": [ 00:08:56.169 "24ffcb37-ca3f-4162-af75-cecadc227425" 00:08:56.169 ], 00:08:56.169 "product_name": "Raid Volume", 00:08:56.169 "block_size": 512, 00:08:56.169 "num_blocks": 126976, 00:08:56.169 "uuid": "24ffcb37-ca3f-4162-af75-cecadc227425", 00:08:56.169 "assigned_rate_limits": { 00:08:56.169 "rw_ios_per_sec": 0, 00:08:56.169 "rw_mbytes_per_sec": 0, 00:08:56.169 "r_mbytes_per_sec": 0, 00:08:56.169 "w_mbytes_per_sec": 0 00:08:56.169 }, 00:08:56.169 "claimed": false, 00:08:56.169 "zoned": false, 00:08:56.169 "supported_io_types": { 00:08:56.169 "read": true, 00:08:56.169 "write": true, 00:08:56.169 "unmap": true, 00:08:56.169 "flush": true, 00:08:56.169 "reset": true, 00:08:56.169 "nvme_admin": false, 00:08:56.169 "nvme_io": false, 00:08:56.169 "nvme_io_md": false, 00:08:56.169 "write_zeroes": true, 00:08:56.169 "zcopy": false, 00:08:56.169 "get_zone_info": false, 00:08:56.169 "zone_management": false, 00:08:56.169 "zone_append": false, 00:08:56.169 "compare": false, 00:08:56.169 "compare_and_write": false, 00:08:56.169 "abort": false, 00:08:56.169 "seek_hole": false, 00:08:56.169 "seek_data": false, 00:08:56.169 "copy": false, 00:08:56.169 "nvme_iov_md": false 00:08:56.169 }, 00:08:56.169 "memory_domains": [ 00:08:56.169 { 00:08:56.169 
"dma_device_id": "system", 00:08:56.169 "dma_device_type": 1 00:08:56.169 }, 00:08:56.169 { 00:08:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.169 "dma_device_type": 2 00:08:56.169 }, 00:08:56.169 { 00:08:56.169 "dma_device_id": "system", 00:08:56.169 "dma_device_type": 1 00:08:56.169 }, 00:08:56.169 { 00:08:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.169 "dma_device_type": 2 00:08:56.169 } 00:08:56.169 ], 00:08:56.169 "driver_specific": { 00:08:56.169 "raid": { 00:08:56.169 "uuid": "24ffcb37-ca3f-4162-af75-cecadc227425", 00:08:56.169 "strip_size_kb": 64, 00:08:56.169 "state": "online", 00:08:56.169 "raid_level": "raid0", 00:08:56.169 "superblock": true, 00:08:56.169 "num_base_bdevs": 2, 00:08:56.169 "num_base_bdevs_discovered": 2, 00:08:56.169 "num_base_bdevs_operational": 2, 00:08:56.169 "base_bdevs_list": [ 00:08:56.169 { 00:08:56.169 "name": "BaseBdev1", 00:08:56.169 "uuid": "3ba73732-8bc4-4d5c-8065-c287ffc6c8b1", 00:08:56.169 "is_configured": true, 00:08:56.169 "data_offset": 2048, 00:08:56.169 "data_size": 63488 00:08:56.169 }, 00:08:56.169 { 00:08:56.169 "name": "BaseBdev2", 00:08:56.169 "uuid": "930650f7-0688-4105-bb27-f23ddadc2e9a", 00:08:56.169 "is_configured": true, 00:08:56.169 "data_offset": 2048, 00:08:56.169 "data_size": 63488 00:08:56.169 } 00:08:56.169 ] 00:08:56.169 } 00:08:56.169 } 00:08:56.169 }' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.169 BaseBdev2' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.169 11:24:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.169 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.169 [2024-11-05 11:24:55.384028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.169 [2024-11-05 11:24:55.384060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.169 [2024-11-05 11:24:55.384110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.428 "name": "Existed_Raid", 00:08:56.428 "uuid": "24ffcb37-ca3f-4162-af75-cecadc227425", 00:08:56.428 "strip_size_kb": 64, 00:08:56.428 "state": "offline", 00:08:56.428 "raid_level": "raid0", 00:08:56.428 "superblock": true, 00:08:56.428 "num_base_bdevs": 2, 00:08:56.428 "num_base_bdevs_discovered": 1, 00:08:56.428 "num_base_bdevs_operational": 1, 00:08:56.428 "base_bdevs_list": [ 00:08:56.428 { 00:08:56.428 "name": null, 00:08:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.428 "is_configured": false, 00:08:56.428 "data_offset": 0, 00:08:56.428 "data_size": 63488 00:08:56.428 }, 00:08:56.428 { 00:08:56.428 "name": "BaseBdev2", 00:08:56.428 "uuid": "930650f7-0688-4105-bb27-f23ddadc2e9a", 00:08:56.428 "is_configured": true, 00:08:56.428 "data_offset": 2048, 00:08:56.428 "data_size": 63488 00:08:56.428 } 00:08:56.428 ] 
00:08:56.428 }' 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.428 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.687 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.947 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.947 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.947 11:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.947 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.947 11:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.947 [2024-11-05 11:24:55.985217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.947 [2024-11-05 11:24:55.985271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.947 11:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61093 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61093 ']' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61093 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61093 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61093' 00:08:56.947 killing process with pid 61093 00:08:56.947 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61093 00:08:56.947 [2024-11-05 11:24:56.172967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.948 11:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61093 00:08:56.948 [2024-11-05 11:24:56.188581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.326 11:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.326 00:08:58.326 real 0m5.149s 00:08:58.326 user 0m7.472s 00:08:58.326 sys 0m0.840s 00:08:58.326 11:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.326 11:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.326 ************************************ 00:08:58.326 END TEST raid_state_function_test_sb 00:08:58.326 ************************************ 00:08:58.326 11:24:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:58.326 11:24:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:58.326 11:24:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.326 11:24:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.326 ************************************ 00:08:58.326 START TEST raid_superblock_test 00:08:58.326 ************************************ 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:58.326 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61345 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61345 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61345 ']' 00:08:58.327 11:24:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:58.327 11:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.327 [2024-11-05 11:24:57.472049] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:58.327 [2024-11-05 11:24:57.472308] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:08:58.586 [2024-11-05 11:24:57.647848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.586 [2024-11-05 11:24:57.760615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.845 [2024-11-05 11:24:57.959360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.846 [2024-11-05 11:24:57.959524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.105 
11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.105 malloc1 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.105 [2024-11-05 11:24:58.359051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.105 [2024-11-05 11:24:58.359193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.105 [2024-11-05 11:24:58.359240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:59.105 [2024-11-05 11:24:58.359303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:59.105 [2024-11-05 11:24:58.361454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.105 [2024-11-05 11:24:58.361525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.105 pt1 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.105 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.365 malloc2 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.365 [2024-11-05 11:24:58.416261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.365 [2024-11-05 11:24:58.416352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.365 [2024-11-05 11:24:58.416379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:59.365 [2024-11-05 11:24:58.416388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.365 [2024-11-05 11:24:58.418395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.365 [2024-11-05 11:24:58.418430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.365 pt2 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.365 [2024-11-05 11:24:58.428314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.365 [2024-11-05 11:24:58.430062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.365 [2024-11-05 11:24:58.430238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:59.365 [2024-11-05 11:24:58.430253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:59.365 [2024-11-05 11:24:58.430494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.365 [2024-11-05 11:24:58.430659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:59.365 [2024-11-05 11:24:58.430670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:59.365 [2024-11-05 11:24:58.430799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.365 11:24:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.365 "name": "raid_bdev1", 00:08:59.365 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:08:59.365 "strip_size_kb": 64, 00:08:59.365 "state": "online", 00:08:59.365 "raid_level": "raid0", 00:08:59.365 "superblock": true, 00:08:59.365 "num_base_bdevs": 2, 00:08:59.365 "num_base_bdevs_discovered": 2, 00:08:59.365 "num_base_bdevs_operational": 2, 00:08:59.365 "base_bdevs_list": [ 00:08:59.365 { 00:08:59.365 "name": "pt1", 00:08:59.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.365 "is_configured": true, 00:08:59.365 "data_offset": 2048, 00:08:59.365 "data_size": 63488 00:08:59.365 }, 00:08:59.365 { 00:08:59.365 "name": "pt2", 00:08:59.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.365 "is_configured": true, 00:08:59.365 "data_offset": 2048, 00:08:59.365 "data_size": 63488 00:08:59.365 } 00:08:59.365 ] 00:08:59.365 }' 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.365 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.624 
11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.624 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.624 [2024-11-05 11:24:58.895829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.885 11:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.885 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.885 "name": "raid_bdev1", 00:08:59.885 "aliases": [ 00:08:59.885 "4edf1ae8-bfec-40d7-8782-8f17b9c40b50" 00:08:59.885 ], 00:08:59.885 "product_name": "Raid Volume", 00:08:59.885 "block_size": 512, 00:08:59.885 "num_blocks": 126976, 00:08:59.885 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:08:59.885 "assigned_rate_limits": { 00:08:59.885 "rw_ios_per_sec": 0, 00:08:59.885 "rw_mbytes_per_sec": 0, 00:08:59.885 "r_mbytes_per_sec": 0, 00:08:59.885 "w_mbytes_per_sec": 0 00:08:59.885 }, 00:08:59.885 "claimed": false, 00:08:59.885 "zoned": false, 00:08:59.885 "supported_io_types": { 00:08:59.885 "read": true, 00:08:59.885 "write": true, 00:08:59.885 "unmap": true, 00:08:59.885 "flush": true, 00:08:59.885 "reset": true, 00:08:59.885 "nvme_admin": false, 00:08:59.885 "nvme_io": false, 00:08:59.885 "nvme_io_md": false, 00:08:59.885 "write_zeroes": true, 00:08:59.885 "zcopy": false, 00:08:59.885 "get_zone_info": false, 00:08:59.885 "zone_management": false, 00:08:59.885 "zone_append": false, 00:08:59.885 "compare": false, 00:08:59.885 "compare_and_write": false, 00:08:59.885 "abort": false, 00:08:59.885 "seek_hole": false, 00:08:59.885 
"seek_data": false, 00:08:59.885 "copy": false, 00:08:59.885 "nvme_iov_md": false 00:08:59.885 }, 00:08:59.885 "memory_domains": [ 00:08:59.885 { 00:08:59.885 "dma_device_id": "system", 00:08:59.885 "dma_device_type": 1 00:08:59.885 }, 00:08:59.885 { 00:08:59.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.885 "dma_device_type": 2 00:08:59.885 }, 00:08:59.885 { 00:08:59.885 "dma_device_id": "system", 00:08:59.885 "dma_device_type": 1 00:08:59.885 }, 00:08:59.885 { 00:08:59.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.885 "dma_device_type": 2 00:08:59.885 } 00:08:59.885 ], 00:08:59.885 "driver_specific": { 00:08:59.885 "raid": { 00:08:59.885 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:08:59.885 "strip_size_kb": 64, 00:08:59.885 "state": "online", 00:08:59.885 "raid_level": "raid0", 00:08:59.885 "superblock": true, 00:08:59.885 "num_base_bdevs": 2, 00:08:59.885 "num_base_bdevs_discovered": 2, 00:08:59.885 "num_base_bdevs_operational": 2, 00:08:59.885 "base_bdevs_list": [ 00:08:59.885 { 00:08:59.885 "name": "pt1", 00:08:59.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.886 "is_configured": true, 00:08:59.886 "data_offset": 2048, 00:08:59.886 "data_size": 63488 00:08:59.886 }, 00:08:59.886 { 00:08:59.886 "name": "pt2", 00:08:59.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.886 "is_configured": true, 00:08:59.886 "data_offset": 2048, 00:08:59.886 "data_size": 63488 00:08:59.886 } 00:08:59.886 ] 00:08:59.886 } 00:08:59.886 } 00:08:59.886 }' 00:08:59.886 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.886 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.886 pt2' 00:08:59.886 11:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 11:24:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.886 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.886 [2024-11-05 11:24:59.143505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4edf1ae8-bfec-40d7-8782-8f17b9c40b50 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4edf1ae8-bfec-40d7-8782-8f17b9c40b50 ']' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 [2024-11-05 11:24:59.175122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.148 [2024-11-05 11:24:59.175166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.148 [2024-11-05 11:24:59.175265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.148 [2024-11-05 11:24:59.175319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.148 [2024-11-05 11:24:59.175335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 [2024-11-05 11:24:59.306905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:00.148 [2024-11-05 11:24:59.309016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:00.148 [2024-11-05 11:24:59.309146] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:00.148 [2024-11-05 11:24:59.309288] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:00.148 [2024-11-05 11:24:59.309399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.148 [2024-11-05 11:24:59.309453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:00.148 request: 00:09:00.148 { 00:09:00.148 "name": "raid_bdev1", 00:09:00.148 "raid_level": "raid0", 00:09:00.148 "base_bdevs": [ 00:09:00.148 "malloc1", 00:09:00.148 "malloc2" 00:09:00.148 ], 00:09:00.148 "strip_size_kb": 64, 00:09:00.148 "superblock": false, 00:09:00.148 "method": "bdev_raid_create", 00:09:00.148 "req_id": 1 00:09:00.148 } 00:09:00.148 Got JSON-RPC error response 00:09:00.148 response: 00:09:00.148 { 00:09:00.148 "code": -17, 00:09:00.148 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:00.148 } 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:00.148 
11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.148 [2024-11-05 11:24:59.374776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.148 [2024-11-05 11:24:59.374902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.148 [2024-11-05 11:24:59.374940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:00.148 [2024-11-05 11:24:59.374979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.148 [2024-11-05 11:24:59.377303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.148 [2024-11-05 11:24:59.377385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.148 [2024-11-05 11:24:59.377526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:00.148 [2024-11-05 11:24:59.377639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.148 pt1 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.148 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.149 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.408 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.408 "name": "raid_bdev1", 00:09:00.408 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:09:00.408 "strip_size_kb": 64, 00:09:00.408 "state": "configuring", 00:09:00.408 "raid_level": "raid0", 00:09:00.408 "superblock": true, 00:09:00.408 "num_base_bdevs": 2, 00:09:00.408 "num_base_bdevs_discovered": 1, 00:09:00.408 "num_base_bdevs_operational": 2, 00:09:00.408 "base_bdevs_list": [ 00:09:00.408 { 00:09:00.408 "name": "pt1", 00:09:00.408 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:00.408 "is_configured": true, 00:09:00.408 "data_offset": 2048, 00:09:00.408 "data_size": 63488 00:09:00.408 }, 00:09:00.408 { 00:09:00.408 "name": null, 00:09:00.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.408 "is_configured": false, 00:09:00.408 "data_offset": 2048, 00:09:00.408 "data_size": 63488 00:09:00.408 } 00:09:00.408 ] 00:09:00.408 }' 00:09:00.408 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.408 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.667 [2024-11-05 11:24:59.826038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.667 [2024-11-05 11:24:59.826209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.667 [2024-11-05 11:24:59.826254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:00.667 [2024-11-05 11:24:59.826301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.667 [2024-11-05 11:24:59.826796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.667 [2024-11-05 11:24:59.826859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:00.667 [2024-11-05 11:24:59.826988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.667 [2024-11-05 11:24:59.827050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.667 [2024-11-05 11:24:59.827233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.667 [2024-11-05 11:24:59.827280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:00.667 [2024-11-05 11:24:59.827537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.667 [2024-11-05 11:24:59.827726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.667 [2024-11-05 11:24:59.827771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:00.667 [2024-11-05 11:24:59.827968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.667 pt2 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.667 "name": "raid_bdev1", 00:09:00.667 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:09:00.667 "strip_size_kb": 64, 00:09:00.667 "state": "online", 00:09:00.667 "raid_level": "raid0", 00:09:00.667 "superblock": true, 00:09:00.667 "num_base_bdevs": 2, 00:09:00.667 "num_base_bdevs_discovered": 2, 00:09:00.667 "num_base_bdevs_operational": 2, 00:09:00.667 "base_bdevs_list": [ 00:09:00.667 { 00:09:00.667 "name": "pt1", 00:09:00.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.667 "is_configured": true, 00:09:00.667 "data_offset": 2048, 00:09:00.667 "data_size": 63488 00:09:00.667 }, 00:09:00.667 { 00:09:00.667 "name": "pt2", 00:09:00.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.667 "is_configured": true, 00:09:00.667 "data_offset": 2048, 00:09:00.667 "data_size": 63488 00:09:00.667 } 00:09:00.667 ] 00:09:00.667 }' 00:09:00.667 11:24:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.667 11:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.234 [2024-11-05 11:25:00.241548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.234 "name": "raid_bdev1", 00:09:01.234 "aliases": [ 00:09:01.234 "4edf1ae8-bfec-40d7-8782-8f17b9c40b50" 00:09:01.234 ], 00:09:01.234 "product_name": "Raid Volume", 00:09:01.234 "block_size": 512, 00:09:01.234 "num_blocks": 126976, 00:09:01.234 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:09:01.234 "assigned_rate_limits": { 00:09:01.234 "rw_ios_per_sec": 0, 00:09:01.234 "rw_mbytes_per_sec": 0, 00:09:01.234 
"r_mbytes_per_sec": 0, 00:09:01.234 "w_mbytes_per_sec": 0 00:09:01.234 }, 00:09:01.234 "claimed": false, 00:09:01.234 "zoned": false, 00:09:01.234 "supported_io_types": { 00:09:01.234 "read": true, 00:09:01.234 "write": true, 00:09:01.234 "unmap": true, 00:09:01.234 "flush": true, 00:09:01.234 "reset": true, 00:09:01.234 "nvme_admin": false, 00:09:01.234 "nvme_io": false, 00:09:01.234 "nvme_io_md": false, 00:09:01.234 "write_zeroes": true, 00:09:01.234 "zcopy": false, 00:09:01.234 "get_zone_info": false, 00:09:01.234 "zone_management": false, 00:09:01.234 "zone_append": false, 00:09:01.234 "compare": false, 00:09:01.234 "compare_and_write": false, 00:09:01.234 "abort": false, 00:09:01.234 "seek_hole": false, 00:09:01.234 "seek_data": false, 00:09:01.234 "copy": false, 00:09:01.234 "nvme_iov_md": false 00:09:01.234 }, 00:09:01.234 "memory_domains": [ 00:09:01.234 { 00:09:01.234 "dma_device_id": "system", 00:09:01.234 "dma_device_type": 1 00:09:01.234 }, 00:09:01.234 { 00:09:01.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.234 "dma_device_type": 2 00:09:01.234 }, 00:09:01.234 { 00:09:01.234 "dma_device_id": "system", 00:09:01.234 "dma_device_type": 1 00:09:01.234 }, 00:09:01.234 { 00:09:01.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.234 "dma_device_type": 2 00:09:01.234 } 00:09:01.234 ], 00:09:01.234 "driver_specific": { 00:09:01.234 "raid": { 00:09:01.234 "uuid": "4edf1ae8-bfec-40d7-8782-8f17b9c40b50", 00:09:01.234 "strip_size_kb": 64, 00:09:01.234 "state": "online", 00:09:01.234 "raid_level": "raid0", 00:09:01.234 "superblock": true, 00:09:01.234 "num_base_bdevs": 2, 00:09:01.234 "num_base_bdevs_discovered": 2, 00:09:01.234 "num_base_bdevs_operational": 2, 00:09:01.234 "base_bdevs_list": [ 00:09:01.234 { 00:09:01.234 "name": "pt1", 00:09:01.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.234 "is_configured": true, 00:09:01.234 "data_offset": 2048, 00:09:01.234 "data_size": 63488 00:09:01.234 }, 00:09:01.234 { 00:09:01.234 "name": 
"pt2", 00:09:01.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.234 "is_configured": true, 00:09:01.234 "data_offset": 2048, 00:09:01.234 "data_size": 63488 00:09:01.234 } 00:09:01.234 ] 00:09:01.234 } 00:09:01.234 } 00:09:01.234 }' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.234 pt2' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.234 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.235 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.494 [2024-11-05 11:25:00.521041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4edf1ae8-bfec-40d7-8782-8f17b9c40b50 '!=' 4edf1ae8-bfec-40d7-8782-8f17b9c40b50 ']' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61345 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61345 ']' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 61345 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61345 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61345' 00:09:01.494 killing process with pid 61345 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61345 00:09:01.494 [2024-11-05 11:25:00.598483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.494 [2024-11-05 11:25:00.598628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.494 11:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61345 00:09:01.494 [2024-11-05 11:25:00.598703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.494 [2024-11-05 11:25:00.598720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:01.754 [2024-11-05 11:25:00.802258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.692 11:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:02.692 00:09:02.692 real 0m4.540s 00:09:02.692 user 0m6.363s 00:09:02.692 sys 0m0.791s 00:09:02.692 11:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:02.692 11:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:02.692 ************************************ 00:09:02.692 END TEST raid_superblock_test 00:09:02.692 ************************************ 00:09:02.950 11:25:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:02.950 11:25:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:02.950 11:25:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.950 11:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.950 ************************************ 00:09:02.950 START TEST raid_read_error_test 00:09:02.950 ************************************ 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.950 11:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.950 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.950 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.950 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.950 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.950 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.62Lfog8yav 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61551 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61551 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61551 ']' 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.951 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.951 [2024-11-05 11:25:02.110397] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:02.951 [2024-11-05 11:25:02.110535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61551 ] 00:09:03.209 [2024-11-05 11:25:02.273746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.209 [2024-11-05 11:25:02.386229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.468 [2024-11-05 11:25:02.581569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.468 [2024-11-05 11:25:02.581606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.736 BaseBdev1_malloc 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.736 true 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.736 11:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 [2024-11-05 11:25:03.005978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:04.010 [2024-11-05 11:25:03.006047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.010 [2024-11-05 11:25:03.006069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:04.010 [2024-11-05 11:25:03.006081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.010 [2024-11-05 11:25:03.008408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.010 [2024-11-05 11:25:03.008535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:04.010 BaseBdev1 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.010 11:25:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 BaseBdev2_malloc 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 true 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 [2024-11-05 11:25:03.073116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.010 [2024-11-05 11:25:03.073187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.010 [2024-11-05 11:25:03.073204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.010 [2024-11-05 11:25:03.073213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.010 [2024-11-05 11:25:03.075296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.010 [2024-11-05 11:25:03.075336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:04.010 BaseBdev2 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 [2024-11-05 11:25:03.085167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.010 [2024-11-05 11:25:03.086916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.010 [2024-11-05 11:25:03.087131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:04.010 [2024-11-05 11:25:03.087160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:04.010 [2024-11-05 11:25:03.087388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.010 [2024-11-05 11:25:03.087561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:04.010 [2024-11-05 11:25:03.087583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:04.010 [2024-11-05 11:25:03.087734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.010 "name": "raid_bdev1", 00:09:04.010 "uuid": "34069191-62f8-47e8-96f7-52e968c4c886", 00:09:04.010 "strip_size_kb": 64, 00:09:04.010 "state": "online", 00:09:04.010 "raid_level": "raid0", 00:09:04.010 "superblock": true, 00:09:04.010 "num_base_bdevs": 2, 00:09:04.010 "num_base_bdevs_discovered": 2, 00:09:04.010 "num_base_bdevs_operational": 2, 00:09:04.010 "base_bdevs_list": [ 00:09:04.010 { 00:09:04.010 "name": "BaseBdev1", 00:09:04.010 "uuid": "64bb5f8a-9d97-5fd9-9062-49a6a4310c24", 00:09:04.010 "is_configured": true, 00:09:04.010 "data_offset": 2048, 00:09:04.010 "data_size": 63488 
00:09:04.010 }, 00:09:04.010 { 00:09:04.010 "name": "BaseBdev2", 00:09:04.010 "uuid": "af880b36-fecf-50e5-9906-b90859d90734", 00:09:04.010 "is_configured": true, 00:09:04.010 "data_offset": 2048, 00:09:04.010 "data_size": 63488 00:09:04.010 } 00:09:04.010 ] 00:09:04.010 }' 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.010 11:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.269 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:04.269 11:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.528 [2024-11-05 11:25:03.629772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.465 "name": "raid_bdev1", 00:09:05.465 "uuid": "34069191-62f8-47e8-96f7-52e968c4c886", 00:09:05.465 "strip_size_kb": 64, 00:09:05.465 "state": "online", 00:09:05.465 "raid_level": "raid0", 00:09:05.465 "superblock": true, 00:09:05.465 "num_base_bdevs": 2, 00:09:05.465 "num_base_bdevs_discovered": 2, 00:09:05.465 "num_base_bdevs_operational": 2, 00:09:05.465 "base_bdevs_list": [ 00:09:05.465 { 00:09:05.465 "name": "BaseBdev1", 00:09:05.465 "uuid": "64bb5f8a-9d97-5fd9-9062-49a6a4310c24", 00:09:05.465 "is_configured": true, 00:09:05.465 "data_offset": 2048, 00:09:05.465 "data_size": 63488 
00:09:05.465 }, 00:09:05.465 { 00:09:05.465 "name": "BaseBdev2", 00:09:05.465 "uuid": "af880b36-fecf-50e5-9906-b90859d90734", 00:09:05.465 "is_configured": true, 00:09:05.465 "data_offset": 2048, 00:09:05.465 "data_size": 63488 00:09:05.465 } 00:09:05.465 ] 00:09:05.465 }' 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.465 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.725 [2024-11-05 11:25:04.993520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.725 [2024-11-05 11:25:04.993568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.725 [2024-11-05 11:25:04.996195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.725 [2024-11-05 11:25:04.996242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.725 [2024-11-05 11:25:04.996275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.725 [2024-11-05 11:25:04.996286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:05.725 { 00:09:05.725 "results": [ 00:09:05.725 { 00:09:05.725 "job": "raid_bdev1", 00:09:05.725 "core_mask": "0x1", 00:09:05.725 "workload": "randrw", 00:09:05.725 "percentage": 50, 00:09:05.725 "status": "finished", 00:09:05.725 "queue_depth": 1, 00:09:05.725 "io_size": 131072, 00:09:05.725 "runtime": 1.364533, 00:09:05.725 "iops": 16117.602139339979, 00:09:05.725 "mibps": 2014.7002674174973, 00:09:05.725 
"io_failed": 1, 00:09:05.725 "io_timeout": 0, 00:09:05.725 "avg_latency_us": 86.14808405468264, 00:09:05.725 "min_latency_us": 25.7117903930131, 00:09:05.725 "max_latency_us": 1430.9170305676855 00:09:05.725 } 00:09:05.725 ], 00:09:05.725 "core_count": 1 00:09:05.725 } 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61551 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61551 ']' 00:09:05.725 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61551 00:09:05.984 11:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61551 00:09:05.984 killing process with pid 61551 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61551' 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61551 00:09:05.984 [2024-11-05 11:25:05.040044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.984 11:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61551 00:09:05.984 [2024-11-05 11:25:05.171702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.62Lfog8yav 00:09:07.364 11:25:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.364 ************************************ 00:09:07.364 END TEST raid_read_error_test 00:09:07.364 ************************************ 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:07.364 00:09:07.364 real 0m4.367s 00:09:07.364 user 0m5.225s 00:09:07.364 sys 0m0.571s 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.364 11:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.364 11:25:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:07.364 11:25:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:07.364 11:25:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.364 11:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.365 ************************************ 00:09:07.365 START TEST raid_write_error_test 00:09:07.365 ************************************ 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:07.365 11:25:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:07.365 11:25:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Tafhden1z0 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61691 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61691 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61691 ']' 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.365 11:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.365 [2024-11-05 11:25:06.543723] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:09:07.365 [2024-11-05 11:25:06.544340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61691 ] 00:09:07.624 [2024-11-05 11:25:06.717193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.624 [2024-11-05 11:25:06.842389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.884 [2024-11-05 11:25:07.057679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.884 [2024-11-05 11:25:07.057723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 BaseBdev1_malloc 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 true 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 [2024-11-05 11:25:07.513619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:08.453 [2024-11-05 11:25:07.513767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.453 [2024-11-05 11:25:07.513805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:08.453 [2024-11-05 11:25:07.513839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.453 [2024-11-05 11:25:07.516051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.453 [2024-11-05 11:25:07.516151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:08.453 BaseBdev1 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 BaseBdev2_malloc 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:08.453 11:25:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 true 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 [2024-11-05 11:25:07.581902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:08.453 [2024-11-05 11:25:07.582036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.453 [2024-11-05 11:25:07.582069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:08.453 [2024-11-05 11:25:07.582100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.453 [2024-11-05 11:25:07.584213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.453 [2024-11-05 11:25:07.584291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:08.453 BaseBdev2 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.453 [2024-11-05 11:25:07.593934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:08.453 [2024-11-05 11:25:07.595984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.453 [2024-11-05 11:25:07.596247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.453 [2024-11-05 11:25:07.596305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:08.453 [2024-11-05 11:25:07.596624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:08.453 [2024-11-05 11:25:07.596862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.453 [2024-11-05 11:25:07.596919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:08.453 [2024-11-05 11:25:07.597111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.453 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.454 "name": "raid_bdev1", 00:09:08.454 "uuid": "691aaf42-638b-4792-9b45-9952406d34df", 00:09:08.454 "strip_size_kb": 64, 00:09:08.454 "state": "online", 00:09:08.454 "raid_level": "raid0", 00:09:08.454 "superblock": true, 00:09:08.454 "num_base_bdevs": 2, 00:09:08.454 "num_base_bdevs_discovered": 2, 00:09:08.454 "num_base_bdevs_operational": 2, 00:09:08.454 "base_bdevs_list": [ 00:09:08.454 { 00:09:08.454 "name": "BaseBdev1", 00:09:08.454 "uuid": "516d2cf8-26be-5361-9522-c44494502399", 00:09:08.454 "is_configured": true, 00:09:08.454 "data_offset": 2048, 00:09:08.454 "data_size": 63488 00:09:08.454 }, 00:09:08.454 { 00:09:08.454 "name": "BaseBdev2", 00:09:08.454 "uuid": "9e753ff1-5954-53de-800a-52a7d95fa8f2", 00:09:08.454 "is_configured": true, 00:09:08.454 "data_offset": 2048, 00:09:08.454 "data_size": 63488 00:09:08.454 } 00:09:08.454 ] 00:09:08.454 }' 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.454 11:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.023 11:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:09.023 11:25:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:09.023 [2024-11-05 11:25:08.142385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.961 11:25:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.961 "name": "raid_bdev1", 00:09:09.961 "uuid": "691aaf42-638b-4792-9b45-9952406d34df", 00:09:09.961 "strip_size_kb": 64, 00:09:09.961 "state": "online", 00:09:09.961 "raid_level": "raid0", 00:09:09.961 "superblock": true, 00:09:09.961 "num_base_bdevs": 2, 00:09:09.961 "num_base_bdevs_discovered": 2, 00:09:09.961 "num_base_bdevs_operational": 2, 00:09:09.961 "base_bdevs_list": [ 00:09:09.961 { 00:09:09.961 "name": "BaseBdev1", 00:09:09.961 "uuid": "516d2cf8-26be-5361-9522-c44494502399", 00:09:09.961 "is_configured": true, 00:09:09.961 "data_offset": 2048, 00:09:09.961 "data_size": 63488 00:09:09.961 }, 00:09:09.961 { 00:09:09.961 "name": "BaseBdev2", 00:09:09.961 "uuid": "9e753ff1-5954-53de-800a-52a7d95fa8f2", 00:09:09.961 "is_configured": true, 00:09:09.961 "data_offset": 2048, 00:09:09.961 "data_size": 63488 00:09:09.961 } 00:09:09.961 ] 00:09:09.961 }' 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.961 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.220 [2024-11-05 11:25:09.468736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:10.220 [2024-11-05 11:25:09.468869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.220 [2024-11-05 11:25:09.471925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.220 [2024-11-05 11:25:09.472020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.220 [2024-11-05 11:25:09.472076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.220 [2024-11-05 11:25:09.472125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:10.220 { 00:09:10.220 "results": [ 00:09:10.220 { 00:09:10.220 "job": "raid_bdev1", 00:09:10.220 "core_mask": "0x1", 00:09:10.220 "workload": "randrw", 00:09:10.220 "percentage": 50, 00:09:10.220 "status": "finished", 00:09:10.220 "queue_depth": 1, 00:09:10.220 "io_size": 131072, 00:09:10.220 "runtime": 1.327063, 00:09:10.220 "iops": 15500.394480141485, 00:09:10.220 "mibps": 1937.5493100176857, 00:09:10.220 "io_failed": 1, 00:09:10.220 "io_timeout": 0, 00:09:10.220 "avg_latency_us": 89.43388494295716, 00:09:10.220 "min_latency_us": 25.9353711790393, 00:09:10.220 "max_latency_us": 1488.1537117903931 00:09:10.220 } 00:09:10.220 ], 00:09:10.220 "core_count": 1 00:09:10.220 } 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61691 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61691 ']' 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61691 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:10.220 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61691 00:09:10.479 killing process with pid 61691 00:09:10.479 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:10.479 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:10.479 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61691' 00:09:10.479 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61691 00:09:10.479 [2024-11-05 11:25:09.521922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.479 11:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61691 00:09:10.479 [2024-11-05 11:25:09.669528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.854 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Tafhden1z0 00:09:11.854 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.854 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.854 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:11.855 ************************************ 00:09:11.855 END TEST raid_write_error_test 00:09:11.855 ************************************ 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:11.855 00:09:11.855 real 0m4.470s 00:09:11.855 user 0m5.331s 00:09:11.855 sys 0m0.580s 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.855 11:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.855 11:25:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:11.855 11:25:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:11.855 11:25:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:11.855 11:25:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.855 11:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.855 ************************************ 00:09:11.855 START TEST raid_state_function_test 00:09:11.855 ************************************ 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
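The check that closes raid_write_error_test above extracts the failures-per-second column from the bdevperf summary (`awk '{print $6}'`) and, because raid0 has no redundancy, requires it to be non-zero (`[[ 0.75 != 0.00 ]]`). A minimal sketch of the arithmetic behind those derived columns, using the values from the `"results"` block earlier in this log; the helper names are illustrative, not SPDK APIs:

```python
# Values copied from the bdevperf result JSON printed above.
result = {
    "iops": 15500.394480141485,
    "io_size": 131072,   # bytes per I/O (128 KiB)
    "io_failed": 1,      # one write failed after bdev_error_inject_error
    "runtime": 1.327063, # seconds
}

def mibps(iops: float, io_size: int) -> float:
    """Throughput in MiB/s derived from IOPS and I/O size in bytes."""
    return iops * io_size / (1024 * 1024)

def fail_per_sec(io_failed: int, runtime: float) -> float:
    """Failed-I/O rate, the value the closing [[ ... != 0.00 ]] test inspects."""
    return io_failed / runtime

print(round(mibps(result["iops"], result["io_size"]), 4))          # ~1937.5493
print(round(fail_per_sec(result["io_failed"], result["runtime"]), 2))  # 0.75
```

The single injected write error over a ~1.33 s run is what produces the 0.75 fail/s figure the test asserts on.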
00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61835 00:09:11.855 11:25:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61835' 00:09:11.855 Process raid pid: 61835 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61835 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61835 ']' 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:11.855 11:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.855 [2024-11-05 11:25:11.072919] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
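Each verify_raid_bdev_state call in this suite pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == ...)'` and then compares individual fields of the selected object. A small sketch of the equivalent selection and checks in Python, assuming the JSON shape shown in this log (values copied from the Existed_Raid output; this is an illustration of the check, not SPDK code):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output seen in this log.
bdevs_json = '''[{
  "name": "Existed_Raid",
  "uuid": "00000000-0000-0000-0000-000000000000",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}]'''

def select_bdev(raw: str, name: str) -> dict:
    """Equivalent of jq -r '.[] | select(.name == NAME)' on the RPC output."""
    return next(b for b in json.loads(raw) if b["name"] == name)

info = select_bdev(bdevs_json, "Existed_Raid")
# The same fields verify_raid_bdev_state compares against its arguments
# (expected_state, raid_level, strip_size, num_base_bdevs_operational).
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 2
```

Before any `bdev_malloc_create`/`bdev_raid_create` calls complete, the array reports `state: configuring` with zero discovered base bdevs, which is exactly what the first verification in this test expects.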
00:09:11.855 [2024-11-05 11:25:11.073097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.114 [2024-11-05 11:25:11.248766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.114 [2024-11-05 11:25:11.367988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.373 [2024-11-05 11:25:11.583631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.373 [2024-11-05 11:25:11.583762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.940 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:12.940 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:12.940 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.940 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.940 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.940 [2024-11-05 11:25:11.953233] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.941 [2024-11-05 11:25:11.953372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.941 [2024-11-05 11:25:11.953410] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.941 [2024-11-05 11:25:11.953434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.941 11:25:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.941 "name": "Existed_Raid", 00:09:12.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.941 "strip_size_kb": 64, 00:09:12.941 "state": "configuring", 00:09:12.941 
"raid_level": "concat", 00:09:12.941 "superblock": false, 00:09:12.941 "num_base_bdevs": 2, 00:09:12.941 "num_base_bdevs_discovered": 0, 00:09:12.941 "num_base_bdevs_operational": 2, 00:09:12.941 "base_bdevs_list": [ 00:09:12.941 { 00:09:12.941 "name": "BaseBdev1", 00:09:12.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.941 "is_configured": false, 00:09:12.941 "data_offset": 0, 00:09:12.941 "data_size": 0 00:09:12.941 }, 00:09:12.941 { 00:09:12.941 "name": "BaseBdev2", 00:09:12.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.941 "is_configured": false, 00:09:12.941 "data_offset": 0, 00:09:12.941 "data_size": 0 00:09:12.941 } 00:09:12.941 ] 00:09:12.941 }' 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.941 11:25:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.200 [2024-11-05 11:25:12.372486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.200 [2024-11-05 11:25:12.372618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:13.200 [2024-11-05 11:25:12.384445] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.200 [2024-11-05 11:25:12.384562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.200 [2024-11-05 11:25:12.384578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.200 [2024-11-05 11:25:12.384591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.200 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.200 [2024-11-05 11:25:12.435674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.201 BaseBdev1 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.201 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 [ 00:09:13.201 { 00:09:13.201 "name": "BaseBdev1", 00:09:13.201 "aliases": [ 00:09:13.201 "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e" 00:09:13.201 ], 00:09:13.201 "product_name": "Malloc disk", 00:09:13.201 "block_size": 512, 00:09:13.201 "num_blocks": 65536, 00:09:13.201 "uuid": "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e", 00:09:13.201 "assigned_rate_limits": { 00:09:13.201 "rw_ios_per_sec": 0, 00:09:13.201 "rw_mbytes_per_sec": 0, 00:09:13.201 "r_mbytes_per_sec": 0, 00:09:13.201 "w_mbytes_per_sec": 0 00:09:13.201 }, 00:09:13.201 "claimed": true, 00:09:13.201 "claim_type": "exclusive_write", 00:09:13.201 "zoned": false, 00:09:13.201 "supported_io_types": { 00:09:13.201 "read": true, 00:09:13.201 "write": true, 00:09:13.201 "unmap": true, 00:09:13.201 "flush": true, 00:09:13.201 "reset": true, 00:09:13.201 "nvme_admin": false, 00:09:13.201 "nvme_io": false, 00:09:13.201 "nvme_io_md": false, 00:09:13.201 "write_zeroes": true, 00:09:13.201 "zcopy": true, 00:09:13.201 "get_zone_info": false, 00:09:13.201 "zone_management": false, 00:09:13.201 "zone_append": false, 00:09:13.201 "compare": false, 00:09:13.201 "compare_and_write": false, 00:09:13.201 "abort": true, 00:09:13.201 "seek_hole": false, 00:09:13.201 "seek_data": false, 00:09:13.201 "copy": true, 00:09:13.201 "nvme_iov_md": 
false 00:09:13.201 }, 00:09:13.201 "memory_domains": [ 00:09:13.201 { 00:09:13.201 "dma_device_id": "system", 00:09:13.201 "dma_device_type": 1 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.201 "dma_device_type": 2 00:09:13.201 } 00:09:13.201 ], 00:09:13.462 "driver_specific": {} 00:09:13.462 } 00:09:13.462 ] 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.462 
11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.462 "name": "Existed_Raid", 00:09:13.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.462 "strip_size_kb": 64, 00:09:13.462 "state": "configuring", 00:09:13.462 "raid_level": "concat", 00:09:13.462 "superblock": false, 00:09:13.462 "num_base_bdevs": 2, 00:09:13.462 "num_base_bdevs_discovered": 1, 00:09:13.462 "num_base_bdevs_operational": 2, 00:09:13.462 "base_bdevs_list": [ 00:09:13.462 { 00:09:13.462 "name": "BaseBdev1", 00:09:13.462 "uuid": "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e", 00:09:13.462 "is_configured": true, 00:09:13.462 "data_offset": 0, 00:09:13.462 "data_size": 65536 00:09:13.462 }, 00:09:13.462 { 00:09:13.462 "name": "BaseBdev2", 00:09:13.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.462 "is_configured": false, 00:09:13.462 "data_offset": 0, 00:09:13.462 "data_size": 0 00:09:13.462 } 00:09:13.462 ] 00:09:13.462 }' 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.462 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.722 [2024-11-05 11:25:12.934929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.722 [2024-11-05 11:25:12.935088] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.722 [2024-11-05 11:25:12.942937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.722 [2024-11-05 11:25:12.944988] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.722 [2024-11-05 11:25:12.945070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.722 "name": "Existed_Raid", 00:09:13.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.722 "strip_size_kb": 64, 00:09:13.722 "state": "configuring", 00:09:13.722 "raid_level": "concat", 00:09:13.722 "superblock": false, 00:09:13.722 "num_base_bdevs": 2, 00:09:13.722 "num_base_bdevs_discovered": 1, 00:09:13.722 "num_base_bdevs_operational": 2, 00:09:13.722 "base_bdevs_list": [ 00:09:13.722 { 00:09:13.722 "name": "BaseBdev1", 00:09:13.722 "uuid": "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e", 00:09:13.722 "is_configured": true, 00:09:13.722 "data_offset": 0, 00:09:13.722 "data_size": 65536 00:09:13.722 }, 00:09:13.722 { 00:09:13.722 "name": "BaseBdev2", 00:09:13.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.722 "is_configured": false, 00:09:13.722 "data_offset": 0, 00:09:13.722 "data_size": 0 00:09:13.722 } 
00:09:13.722 ] 00:09:13.722 }' 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.722 11:25:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.292 [2024-11-05 11:25:13.410906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.292 [2024-11-05 11:25:13.411048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.292 [2024-11-05 11:25:13.411061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:14.292 [2024-11-05 11:25:13.411415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:14.292 [2024-11-05 11:25:13.411597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.292 [2024-11-05 11:25:13.411619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.292 [2024-11-05 11:25:13.411928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.292 BaseBdev2 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.292 11:25:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.292 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.292 [ 00:09:14.292 { 00:09:14.292 "name": "BaseBdev2", 00:09:14.292 "aliases": [ 00:09:14.292 "71cfbd8c-c844-4a26-82f9-bf050976058d" 00:09:14.292 ], 00:09:14.292 "product_name": "Malloc disk", 00:09:14.292 "block_size": 512, 00:09:14.292 "num_blocks": 65536, 00:09:14.292 "uuid": "71cfbd8c-c844-4a26-82f9-bf050976058d", 00:09:14.292 "assigned_rate_limits": { 00:09:14.292 "rw_ios_per_sec": 0, 00:09:14.292 "rw_mbytes_per_sec": 0, 00:09:14.292 "r_mbytes_per_sec": 0, 00:09:14.292 "w_mbytes_per_sec": 0 00:09:14.292 }, 00:09:14.292 "claimed": true, 00:09:14.292 "claim_type": "exclusive_write", 00:09:14.292 "zoned": false, 00:09:14.292 "supported_io_types": { 00:09:14.292 "read": true, 00:09:14.293 "write": true, 00:09:14.293 "unmap": true, 00:09:14.293 "flush": true, 00:09:14.293 "reset": true, 00:09:14.293 "nvme_admin": false, 00:09:14.293 "nvme_io": false, 00:09:14.293 "nvme_io_md": 
false, 00:09:14.293 "write_zeroes": true, 00:09:14.293 "zcopy": true, 00:09:14.293 "get_zone_info": false, 00:09:14.293 "zone_management": false, 00:09:14.293 "zone_append": false, 00:09:14.293 "compare": false, 00:09:14.293 "compare_and_write": false, 00:09:14.293 "abort": true, 00:09:14.293 "seek_hole": false, 00:09:14.293 "seek_data": false, 00:09:14.293 "copy": true, 00:09:14.293 "nvme_iov_md": false 00:09:14.293 }, 00:09:14.293 "memory_domains": [ 00:09:14.293 { 00:09:14.293 "dma_device_id": "system", 00:09:14.293 "dma_device_type": 1 00:09:14.293 }, 00:09:14.293 { 00:09:14.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.293 "dma_device_type": 2 00:09:14.293 } 00:09:14.293 ], 00:09:14.293 "driver_specific": {} 00:09:14.293 } 00:09:14.293 ] 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.293 "name": "Existed_Raid", 00:09:14.293 "uuid": "8ad14b8c-29b9-4ab2-8de5-7bab638d2c07", 00:09:14.293 "strip_size_kb": 64, 00:09:14.293 "state": "online", 00:09:14.293 "raid_level": "concat", 00:09:14.293 "superblock": false, 00:09:14.293 "num_base_bdevs": 2, 00:09:14.293 "num_base_bdevs_discovered": 2, 00:09:14.293 "num_base_bdevs_operational": 2, 00:09:14.293 "base_bdevs_list": [ 00:09:14.293 { 00:09:14.293 "name": "BaseBdev1", 00:09:14.293 "uuid": "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e", 00:09:14.293 "is_configured": true, 00:09:14.293 "data_offset": 0, 00:09:14.293 "data_size": 65536 00:09:14.293 }, 00:09:14.293 { 00:09:14.293 "name": "BaseBdev2", 00:09:14.293 "uuid": "71cfbd8c-c844-4a26-82f9-bf050976058d", 00:09:14.293 "is_configured": true, 00:09:14.293 "data_offset": 0, 00:09:14.293 "data_size": 65536 00:09:14.293 } 00:09:14.293 ] 00:09:14.293 }' 00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:14.293 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.863 [2024-11-05 11:25:13.918452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.863 "name": "Existed_Raid", 00:09:14.863 "aliases": [ 00:09:14.863 "8ad14b8c-29b9-4ab2-8de5-7bab638d2c07" 00:09:14.863 ], 00:09:14.863 "product_name": "Raid Volume", 00:09:14.863 "block_size": 512, 00:09:14.863 "num_blocks": 131072, 00:09:14.863 "uuid": "8ad14b8c-29b9-4ab2-8de5-7bab638d2c07", 00:09:14.863 "assigned_rate_limits": { 00:09:14.863 "rw_ios_per_sec": 0, 00:09:14.863 "rw_mbytes_per_sec": 0, 00:09:14.863 "r_mbytes_per_sec": 
0, 00:09:14.863 "w_mbytes_per_sec": 0 00:09:14.863 }, 00:09:14.863 "claimed": false, 00:09:14.863 "zoned": false, 00:09:14.863 "supported_io_types": { 00:09:14.863 "read": true, 00:09:14.863 "write": true, 00:09:14.863 "unmap": true, 00:09:14.863 "flush": true, 00:09:14.863 "reset": true, 00:09:14.863 "nvme_admin": false, 00:09:14.863 "nvme_io": false, 00:09:14.863 "nvme_io_md": false, 00:09:14.863 "write_zeroes": true, 00:09:14.863 "zcopy": false, 00:09:14.863 "get_zone_info": false, 00:09:14.863 "zone_management": false, 00:09:14.863 "zone_append": false, 00:09:14.863 "compare": false, 00:09:14.863 "compare_and_write": false, 00:09:14.863 "abort": false, 00:09:14.863 "seek_hole": false, 00:09:14.863 "seek_data": false, 00:09:14.863 "copy": false, 00:09:14.863 "nvme_iov_md": false 00:09:14.863 }, 00:09:14.863 "memory_domains": [ 00:09:14.863 { 00:09:14.863 "dma_device_id": "system", 00:09:14.863 "dma_device_type": 1 00:09:14.863 }, 00:09:14.863 { 00:09:14.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.863 "dma_device_type": 2 00:09:14.863 }, 00:09:14.863 { 00:09:14.863 "dma_device_id": "system", 00:09:14.863 "dma_device_type": 1 00:09:14.863 }, 00:09:14.863 { 00:09:14.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.863 "dma_device_type": 2 00:09:14.863 } 00:09:14.863 ], 00:09:14.863 "driver_specific": { 00:09:14.863 "raid": { 00:09:14.863 "uuid": "8ad14b8c-29b9-4ab2-8de5-7bab638d2c07", 00:09:14.863 "strip_size_kb": 64, 00:09:14.863 "state": "online", 00:09:14.863 "raid_level": "concat", 00:09:14.863 "superblock": false, 00:09:14.863 "num_base_bdevs": 2, 00:09:14.863 "num_base_bdevs_discovered": 2, 00:09:14.863 "num_base_bdevs_operational": 2, 00:09:14.863 "base_bdevs_list": [ 00:09:14.863 { 00:09:14.863 "name": "BaseBdev1", 00:09:14.863 "uuid": "fc7885dd-6da3-4abe-9cc0-b8b0b9e0191e", 00:09:14.863 "is_configured": true, 00:09:14.863 "data_offset": 0, 00:09:14.863 "data_size": 65536 00:09:14.863 }, 00:09:14.863 { 00:09:14.863 "name": "BaseBdev2", 
00:09:14.863 "uuid": "71cfbd8c-c844-4a26-82f9-bf050976058d", 00:09:14.863 "is_configured": true, 00:09:14.863 "data_offset": 0, 00:09:14.863 "data_size": 65536 00:09:14.863 } 00:09:14.863 ] 00:09:14.863 } 00:09:14.863 } 00:09:14.863 }' 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:14.863 BaseBdev2' 00:09:14.863 11:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.863 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.864 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.864 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.864 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.123 [2024-11-05 11:25:14.141808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.123 [2024-11-05 11:25:14.141942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.123 [2024-11-05 11:25:14.142029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.123 "name": "Existed_Raid", 00:09:15.123 "uuid": "8ad14b8c-29b9-4ab2-8de5-7bab638d2c07", 00:09:15.123 "strip_size_kb": 64, 00:09:15.123 
"state": "offline", 00:09:15.123 "raid_level": "concat", 00:09:15.123 "superblock": false, 00:09:15.123 "num_base_bdevs": 2, 00:09:15.123 "num_base_bdevs_discovered": 1, 00:09:15.123 "num_base_bdevs_operational": 1, 00:09:15.123 "base_bdevs_list": [ 00:09:15.123 { 00:09:15.123 "name": null, 00:09:15.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.123 "is_configured": false, 00:09:15.123 "data_offset": 0, 00:09:15.123 "data_size": 65536 00:09:15.123 }, 00:09:15.123 { 00:09:15.123 "name": "BaseBdev2", 00:09:15.123 "uuid": "71cfbd8c-c844-4a26-82f9-bf050976058d", 00:09:15.123 "is_configured": true, 00:09:15.123 "data_offset": 0, 00:09:15.123 "data_size": 65536 00:09:15.123 } 00:09:15.123 ] 00:09:15.123 }' 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.123 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 [2024-11-05 11:25:14.740291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.693 [2024-11-05 11:25:14.740355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61835 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61835 ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61835 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61835 00:09:15.693 killing process with pid 61835 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61835' 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61835 00:09:15.693 [2024-11-05 11:25:14.937933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.693 11:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61835 00:09:15.693 [2024-11-05 11:25:14.956478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.074 00:09:17.074 real 0m5.181s 00:09:17.074 user 0m7.433s 00:09:17.074 sys 0m0.816s 00:09:17.074 ************************************ 00:09:17.074 END TEST raid_state_function_test 00:09:17.074 ************************************ 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.074 11:25:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:17.074 11:25:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:09:17.074 11:25:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.074 11:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.074 ************************************ 00:09:17.074 START TEST raid_state_function_test_sb 00:09:17.074 ************************************ 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62088 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62088' 00:09:17.074 Process raid pid: 62088 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62088 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62088 ']' 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.074 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.074 [2024-11-05 11:25:16.327260] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:17.074 [2024-11-05 11:25:16.327470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.334 [2024-11-05 11:25:16.485297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.334 [2024-11-05 11:25:16.607527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.594 [2024-11-05 11:25:16.811773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.594 [2024-11-05 11:25:16.811910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.163 [2024-11-05 11:25:17.217969] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:18.163 [2024-11-05 11:25:17.218113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.163 [2024-11-05 11:25:17.218158] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.163 [2024-11-05 11:25:17.218184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.163 "name": "Existed_Raid", 00:09:18.163 "uuid": "c3f2ceec-018a-4ab4-86b7-239ce3d82e43", 00:09:18.163 "strip_size_kb": 64, 00:09:18.163 "state": "configuring", 00:09:18.163 "raid_level": "concat", 00:09:18.163 "superblock": true, 00:09:18.163 "num_base_bdevs": 2, 00:09:18.163 "num_base_bdevs_discovered": 0, 00:09:18.163 "num_base_bdevs_operational": 2, 00:09:18.163 "base_bdevs_list": [ 00:09:18.163 { 00:09:18.163 "name": "BaseBdev1", 00:09:18.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.163 "is_configured": false, 00:09:18.163 "data_offset": 0, 00:09:18.163 "data_size": 0 00:09:18.163 }, 00:09:18.163 { 00:09:18.163 "name": "BaseBdev2", 00:09:18.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.163 "is_configured": false, 00:09:18.163 "data_offset": 0, 00:09:18.163 "data_size": 0 00:09:18.163 } 00:09:18.163 ] 00:09:18.163 }' 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.163 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.422 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.422 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.422 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.422 [2024-11-05 11:25:17.673174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:18.422 [2024-11-05 11:25:17.673239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.422 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.422 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.423 [2024-11-05 11:25:17.685184] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.423 [2024-11-05 11:25:17.685248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.423 [2024-11-05 11:25:17.685258] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.423 [2024-11-05 11:25:17.685269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.423 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.682 [2024-11-05 11:25:17.732464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.683 BaseBdev1 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.683 [ 00:09:18.683 { 00:09:18.683 "name": "BaseBdev1", 00:09:18.683 "aliases": [ 00:09:18.683 "1945469b-6415-4503-b8d2-65e7a58f2b68" 00:09:18.683 ], 00:09:18.683 "product_name": "Malloc disk", 00:09:18.683 "block_size": 512, 00:09:18.683 "num_blocks": 65536, 00:09:18.683 "uuid": "1945469b-6415-4503-b8d2-65e7a58f2b68", 00:09:18.683 "assigned_rate_limits": { 00:09:18.683 "rw_ios_per_sec": 0, 00:09:18.683 "rw_mbytes_per_sec": 0, 00:09:18.683 "r_mbytes_per_sec": 0, 00:09:18.683 "w_mbytes_per_sec": 0 00:09:18.683 }, 00:09:18.683 "claimed": true, 
00:09:18.683 "claim_type": "exclusive_write", 00:09:18.683 "zoned": false, 00:09:18.683 "supported_io_types": { 00:09:18.683 "read": true, 00:09:18.683 "write": true, 00:09:18.683 "unmap": true, 00:09:18.683 "flush": true, 00:09:18.683 "reset": true, 00:09:18.683 "nvme_admin": false, 00:09:18.683 "nvme_io": false, 00:09:18.683 "nvme_io_md": false, 00:09:18.683 "write_zeroes": true, 00:09:18.683 "zcopy": true, 00:09:18.683 "get_zone_info": false, 00:09:18.683 "zone_management": false, 00:09:18.683 "zone_append": false, 00:09:18.683 "compare": false, 00:09:18.683 "compare_and_write": false, 00:09:18.683 "abort": true, 00:09:18.683 "seek_hole": false, 00:09:18.683 "seek_data": false, 00:09:18.683 "copy": true, 00:09:18.683 "nvme_iov_md": false 00:09:18.683 }, 00:09:18.683 "memory_domains": [ 00:09:18.683 { 00:09:18.683 "dma_device_id": "system", 00:09:18.683 "dma_device_type": 1 00:09:18.683 }, 00:09:18.683 { 00:09:18.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.683 "dma_device_type": 2 00:09:18.683 } 00:09:18.683 ], 00:09:18.683 "driver_specific": {} 00:09:18.683 } 00:09:18.683 ] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.683 11:25:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.683 "name": "Existed_Raid", 00:09:18.683 "uuid": "1d7cd8d6-0438-4b8b-ba93-051f87fc5572", 00:09:18.683 "strip_size_kb": 64, 00:09:18.683 "state": "configuring", 00:09:18.683 "raid_level": "concat", 00:09:18.683 "superblock": true, 00:09:18.683 "num_base_bdevs": 2, 00:09:18.683 "num_base_bdevs_discovered": 1, 00:09:18.683 "num_base_bdevs_operational": 2, 00:09:18.683 "base_bdevs_list": [ 00:09:18.683 { 00:09:18.683 "name": "BaseBdev1", 00:09:18.683 "uuid": "1945469b-6415-4503-b8d2-65e7a58f2b68", 00:09:18.683 "is_configured": true, 00:09:18.683 "data_offset": 2048, 00:09:18.683 "data_size": 63488 00:09:18.683 }, 00:09:18.683 { 00:09:18.683 "name": "BaseBdev2", 00:09:18.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.683 
"is_configured": false, 00:09:18.683 "data_offset": 0, 00:09:18.683 "data_size": 0 00:09:18.683 } 00:09:18.683 ] 00:09:18.683 }' 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.683 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.943 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.943 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.943 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.202 [2024-11-05 11:25:18.219759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.202 [2024-11-05 11:25:18.219834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.202 [2024-11-05 11:25:18.227834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.202 [2024-11-05 11:25:18.229778] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.202 [2024-11-05 11:25:18.229864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.202 11:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.202 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.202 11:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.202 "name": "Existed_Raid", 00:09:19.202 "uuid": "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38", 00:09:19.202 "strip_size_kb": 64, 00:09:19.202 "state": "configuring", 00:09:19.202 "raid_level": "concat", 00:09:19.202 "superblock": true, 00:09:19.202 "num_base_bdevs": 2, 00:09:19.203 "num_base_bdevs_discovered": 1, 00:09:19.203 "num_base_bdevs_operational": 2, 00:09:19.203 "base_bdevs_list": [ 00:09:19.203 { 00:09:19.203 "name": "BaseBdev1", 00:09:19.203 "uuid": "1945469b-6415-4503-b8d2-65e7a58f2b68", 00:09:19.203 "is_configured": true, 00:09:19.203 "data_offset": 2048, 00:09:19.203 "data_size": 63488 00:09:19.203 }, 00:09:19.203 { 00:09:19.203 "name": "BaseBdev2", 00:09:19.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.203 "is_configured": false, 00:09:19.203 "data_offset": 0, 00:09:19.203 "data_size": 0 00:09:19.203 } 00:09:19.203 ] 00:09:19.203 }' 00:09:19.203 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.203 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.462 [2024-11-05 11:25:18.725186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.462 [2024-11-05 11:25:18.725463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.462 [2024-11-05 11:25:18.725496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:19.462 [2024-11-05 11:25:18.725751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:19.462 [2024-11-05 11:25:18.725913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.462 [2024-11-05 11:25:18.725926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.462 BaseBdev2 00:09:19.462 [2024-11-05 11:25:18.726071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.462 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.463 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.722 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.722 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.723 11:25:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.723 [ 00:09:19.723 { 00:09:19.723 "name": "BaseBdev2", 00:09:19.723 "aliases": [ 00:09:19.723 "13b88f2b-817a-4129-aa2a-f132ebd70307" 00:09:19.723 ], 00:09:19.723 "product_name": "Malloc disk", 00:09:19.723 "block_size": 512, 00:09:19.723 "num_blocks": 65536, 00:09:19.723 "uuid": "13b88f2b-817a-4129-aa2a-f132ebd70307", 00:09:19.723 "assigned_rate_limits": { 00:09:19.723 "rw_ios_per_sec": 0, 00:09:19.723 "rw_mbytes_per_sec": 0, 00:09:19.723 "r_mbytes_per_sec": 0, 00:09:19.723 "w_mbytes_per_sec": 0 00:09:19.723 }, 00:09:19.723 "claimed": true, 00:09:19.723 "claim_type": "exclusive_write", 00:09:19.723 "zoned": false, 00:09:19.723 "supported_io_types": { 00:09:19.723 "read": true, 00:09:19.723 "write": true, 00:09:19.723 "unmap": true, 00:09:19.723 "flush": true, 00:09:19.723 "reset": true, 00:09:19.723 "nvme_admin": false, 00:09:19.723 "nvme_io": false, 00:09:19.723 "nvme_io_md": false, 00:09:19.723 "write_zeroes": true, 00:09:19.723 "zcopy": true, 00:09:19.723 "get_zone_info": false, 00:09:19.723 "zone_management": false, 00:09:19.723 "zone_append": false, 00:09:19.723 "compare": false, 00:09:19.723 "compare_and_write": false, 00:09:19.723 "abort": true, 00:09:19.723 "seek_hole": false, 00:09:19.723 "seek_data": false, 00:09:19.723 "copy": true, 00:09:19.723 "nvme_iov_md": false 00:09:19.723 }, 00:09:19.723 "memory_domains": [ 00:09:19.723 { 00:09:19.723 "dma_device_id": "system", 00:09:19.723 "dma_device_type": 1 00:09:19.723 }, 00:09:19.723 { 00:09:19.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.723 "dma_device_type": 2 00:09:19.723 } 00:09:19.723 ], 00:09:19.723 "driver_specific": {} 00:09:19.723 } 00:09:19.723 ] 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:19.723 11:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.723 11:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.723 "name": "Existed_Raid", 00:09:19.723 "uuid": "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38", 00:09:19.723 "strip_size_kb": 64, 00:09:19.723 "state": "online", 00:09:19.723 "raid_level": "concat", 00:09:19.723 "superblock": true, 00:09:19.723 "num_base_bdevs": 2, 00:09:19.723 "num_base_bdevs_discovered": 2, 00:09:19.723 "num_base_bdevs_operational": 2, 00:09:19.723 "base_bdevs_list": [ 00:09:19.723 { 00:09:19.723 "name": "BaseBdev1", 00:09:19.723 "uuid": "1945469b-6415-4503-b8d2-65e7a58f2b68", 00:09:19.723 "is_configured": true, 00:09:19.723 "data_offset": 2048, 00:09:19.723 "data_size": 63488 00:09:19.723 }, 00:09:19.723 { 00:09:19.723 "name": "BaseBdev2", 00:09:19.723 "uuid": "13b88f2b-817a-4129-aa2a-f132ebd70307", 00:09:19.723 "is_configured": true, 00:09:19.723 "data_offset": 2048, 00:09:19.723 "data_size": 63488 00:09:19.723 } 00:09:19.723 ] 00:09:19.723 }' 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.723 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.982 11:25:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.982 [2024-11-05 11:25:19.224649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.982 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.982 "name": "Existed_Raid", 00:09:19.982 "aliases": [ 00:09:19.982 "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38" 00:09:19.982 ], 00:09:19.982 "product_name": "Raid Volume", 00:09:19.982 "block_size": 512, 00:09:19.982 "num_blocks": 126976, 00:09:19.982 "uuid": "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38", 00:09:19.982 "assigned_rate_limits": { 00:09:19.982 "rw_ios_per_sec": 0, 00:09:19.982 "rw_mbytes_per_sec": 0, 00:09:19.982 "r_mbytes_per_sec": 0, 00:09:19.982 "w_mbytes_per_sec": 0 00:09:19.982 }, 00:09:19.982 "claimed": false, 00:09:19.982 "zoned": false, 00:09:19.982 "supported_io_types": { 00:09:19.982 "read": true, 00:09:19.982 "write": true, 00:09:19.982 "unmap": true, 00:09:19.982 "flush": true, 00:09:19.982 "reset": true, 00:09:19.982 "nvme_admin": false, 00:09:19.982 "nvme_io": false, 00:09:19.982 "nvme_io_md": false, 00:09:19.982 "write_zeroes": true, 00:09:19.982 "zcopy": false, 00:09:19.982 "get_zone_info": false, 00:09:19.982 "zone_management": false, 00:09:19.982 "zone_append": false, 00:09:19.982 "compare": false, 00:09:19.982 "compare_and_write": false, 00:09:19.982 "abort": false, 00:09:19.982 "seek_hole": false, 00:09:19.982 "seek_data": false, 00:09:19.982 "copy": false, 00:09:19.982 "nvme_iov_md": false 00:09:19.982 }, 00:09:19.982 "memory_domains": [ 00:09:19.983 { 00:09:19.983 "dma_device_id": 
"system", 00:09:19.983 "dma_device_type": 1 00:09:19.983 }, 00:09:19.983 { 00:09:19.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.983 "dma_device_type": 2 00:09:19.983 }, 00:09:19.983 { 00:09:19.983 "dma_device_id": "system", 00:09:19.983 "dma_device_type": 1 00:09:19.983 }, 00:09:19.983 { 00:09:19.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.983 "dma_device_type": 2 00:09:19.983 } 00:09:19.983 ], 00:09:19.983 "driver_specific": { 00:09:19.983 "raid": { 00:09:19.983 "uuid": "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38", 00:09:19.983 "strip_size_kb": 64, 00:09:19.983 "state": "online", 00:09:19.983 "raid_level": "concat", 00:09:19.983 "superblock": true, 00:09:19.983 "num_base_bdevs": 2, 00:09:19.983 "num_base_bdevs_discovered": 2, 00:09:19.983 "num_base_bdevs_operational": 2, 00:09:19.983 "base_bdevs_list": [ 00:09:19.983 { 00:09:19.983 "name": "BaseBdev1", 00:09:19.983 "uuid": "1945469b-6415-4503-b8d2-65e7a58f2b68", 00:09:19.983 "is_configured": true, 00:09:19.983 "data_offset": 2048, 00:09:19.983 "data_size": 63488 00:09:19.983 }, 00:09:19.983 { 00:09:19.983 "name": "BaseBdev2", 00:09:19.983 "uuid": "13b88f2b-817a-4129-aa2a-f132ebd70307", 00:09:19.983 "is_configured": true, 00:09:19.983 "data_offset": 2048, 00:09:19.983 "data_size": 63488 00:09:19.983 } 00:09:19.983 ] 00:09:19.983 } 00:09:19.983 } 00:09:19.983 }' 00:09:19.983 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.242 BaseBdev2' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.242 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.242 [2024-11-05 11:25:19.420065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.242 [2024-11-05 11:25:19.420104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.242 [2024-11-05 11:25:19.420164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.504 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:20.505 11:25:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.505 "name": "Existed_Raid", 00:09:20.505 "uuid": "5bd76625-6c4a-4a1c-96ab-d4b2d7d0ce38", 00:09:20.505 "strip_size_kb": 64, 00:09:20.505 "state": "offline", 00:09:20.505 "raid_level": "concat", 00:09:20.505 "superblock": true, 00:09:20.505 "num_base_bdevs": 2, 00:09:20.505 "num_base_bdevs_discovered": 1, 00:09:20.505 "num_base_bdevs_operational": 1, 00:09:20.505 "base_bdevs_list": [ 00:09:20.505 { 00:09:20.505 "name": null, 00:09:20.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.505 "is_configured": false, 00:09:20.505 "data_offset": 0, 00:09:20.505 "data_size": 63488 00:09:20.505 }, 00:09:20.505 { 00:09:20.505 "name": "BaseBdev2", 00:09:20.505 "uuid": "13b88f2b-817a-4129-aa2a-f132ebd70307", 00:09:20.505 "is_configured": true, 00:09:20.505 "data_offset": 2048, 00:09:20.505 "data_size": 63488 00:09:20.505 } 00:09:20.505 ] 00:09:20.505 }' 00:09:20.505 
11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.505 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.765 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 [2024-11-05 11:25:19.947888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.765 [2024-11-05 11:25:19.948053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62088 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62088 ']' 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62088 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62088 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:21.023 11:25:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62088' 00:09:21.023 killing process with pid 62088 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62088 00:09:21.023 [2024-11-05 11:25:20.123903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.023 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62088 00:09:21.023 [2024-11-05 11:25:20.140830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.403 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.403 00:09:22.403 real 0m5.056s 00:09:22.403 user 0m7.263s 00:09:22.403 sys 0m0.831s 00:09:22.403 11:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.403 ************************************ 00:09:22.403 END TEST raid_state_function_test_sb 00:09:22.403 ************************************ 00:09:22.403 11:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.403 11:25:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:22.403 11:25:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:22.403 11:25:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.403 11:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.403 ************************************ 00:09:22.403 START TEST raid_superblock_test 00:09:22.403 ************************************ 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62340 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62340 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62340 ']' 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.403 11:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.403 [2024-11-05 11:25:21.454811] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:22.403 [2024-11-05 11:25:21.454960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62340 ] 00:09:22.403 [2024-11-05 11:25:21.624749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.663 [2024-11-05 11:25:21.738666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.923 [2024-11-05 11:25:21.943591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.923 [2024-11-05 11:25:21.943665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.183 11:25:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.183 malloc1 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.183 [2024-11-05 11:25:22.398578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.183 [2024-11-05 11:25:22.398757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.183 [2024-11-05 11:25:22.398801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:23.183 [2024-11-05 11:25:22.398832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.183 
[2024-11-05 11:25:22.401050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.183 [2024-11-05 11:25:22.401138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.183 pt1 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.183 malloc2 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.183 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.183 11:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.443 [2024-11-05 11:25:22.460819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.443 [2024-11-05 11:25:22.460981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.443 [2024-11-05 11:25:22.461023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:23.443 [2024-11-05 11:25:22.461035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.443 [2024-11-05 11:25:22.463257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.443 [2024-11-05 11:25:22.463302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.443 pt2 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.443 [2024-11-05 11:25:22.472856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.443 [2024-11-05 11:25:22.474629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.443 [2024-11-05 11:25:22.474883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:23.443 [2024-11-05 11:25:22.474901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:23.443 
[2024-11-05 11:25:22.475199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:23.443 [2024-11-05 11:25:22.475357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:23.443 [2024-11-05 11:25:22.475369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:23.443 [2024-11-05 11:25:22.475545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.443 11:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.443 "name": "raid_bdev1", 00:09:23.443 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:23.443 "strip_size_kb": 64, 00:09:23.443 "state": "online", 00:09:23.443 "raid_level": "concat", 00:09:23.443 "superblock": true, 00:09:23.443 "num_base_bdevs": 2, 00:09:23.443 "num_base_bdevs_discovered": 2, 00:09:23.443 "num_base_bdevs_operational": 2, 00:09:23.443 "base_bdevs_list": [ 00:09:23.443 { 00:09:23.443 "name": "pt1", 00:09:23.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.443 "is_configured": true, 00:09:23.443 "data_offset": 2048, 00:09:23.443 "data_size": 63488 00:09:23.443 }, 00:09:23.443 { 00:09:23.443 "name": "pt2", 00:09:23.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.443 "is_configured": true, 00:09:23.443 "data_offset": 2048, 00:09:23.443 "data_size": 63488 00:09:23.443 } 00:09:23.443 ] 00:09:23.443 }' 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.443 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.705 
11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.705 [2024-11-05 11:25:22.964282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.705 11:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.967 11:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.967 "name": "raid_bdev1", 00:09:23.967 "aliases": [ 00:09:23.967 "aee17212-7bc1-40ef-8543-1681f02f47dc" 00:09:23.967 ], 00:09:23.967 "product_name": "Raid Volume", 00:09:23.967 "block_size": 512, 00:09:23.967 "num_blocks": 126976, 00:09:23.967 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:23.967 "assigned_rate_limits": { 00:09:23.967 "rw_ios_per_sec": 0, 00:09:23.967 "rw_mbytes_per_sec": 0, 00:09:23.967 "r_mbytes_per_sec": 0, 00:09:23.967 "w_mbytes_per_sec": 0 00:09:23.967 }, 00:09:23.967 "claimed": false, 00:09:23.967 "zoned": false, 00:09:23.967 "supported_io_types": { 00:09:23.967 "read": true, 00:09:23.967 "write": true, 00:09:23.967 "unmap": true, 00:09:23.967 "flush": true, 00:09:23.967 "reset": true, 00:09:23.967 "nvme_admin": false, 00:09:23.967 "nvme_io": false, 00:09:23.967 "nvme_io_md": false, 00:09:23.967 "write_zeroes": true, 00:09:23.967 "zcopy": false, 00:09:23.967 "get_zone_info": false, 00:09:23.967 "zone_management": false, 00:09:23.967 "zone_append": false, 00:09:23.967 "compare": false, 00:09:23.967 "compare_and_write": false, 00:09:23.967 "abort": false, 00:09:23.967 "seek_hole": false, 00:09:23.967 
"seek_data": false, 00:09:23.967 "copy": false, 00:09:23.967 "nvme_iov_md": false 00:09:23.967 }, 00:09:23.967 "memory_domains": [ 00:09:23.967 { 00:09:23.967 "dma_device_id": "system", 00:09:23.967 "dma_device_type": 1 00:09:23.967 }, 00:09:23.967 { 00:09:23.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.967 "dma_device_type": 2 00:09:23.967 }, 00:09:23.967 { 00:09:23.967 "dma_device_id": "system", 00:09:23.967 "dma_device_type": 1 00:09:23.967 }, 00:09:23.967 { 00:09:23.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.967 "dma_device_type": 2 00:09:23.967 } 00:09:23.967 ], 00:09:23.967 "driver_specific": { 00:09:23.967 "raid": { 00:09:23.967 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:23.967 "strip_size_kb": 64, 00:09:23.967 "state": "online", 00:09:23.967 "raid_level": "concat", 00:09:23.967 "superblock": true, 00:09:23.967 "num_base_bdevs": 2, 00:09:23.967 "num_base_bdevs_discovered": 2, 00:09:23.967 "num_base_bdevs_operational": 2, 00:09:23.967 "base_bdevs_list": [ 00:09:23.967 { 00:09:23.967 "name": "pt1", 00:09:23.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.967 "is_configured": true, 00:09:23.967 "data_offset": 2048, 00:09:23.967 "data_size": 63488 00:09:23.967 }, 00:09:23.967 { 00:09:23.967 "name": "pt2", 00:09:23.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.967 "is_configured": true, 00:09:23.967 "data_offset": 2048, 00:09:23.967 "data_size": 63488 00:09:23.967 } 00:09:23.967 ] 00:09:23.967 } 00:09:23.967 } 00:09:23.967 }' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.967 pt2' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.967 11:25:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:23.967 [2024-11-05 11:25:23.187813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aee17212-7bc1-40ef-8543-1681f02f47dc 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aee17212-7bc1-40ef-8543-1681f02f47dc ']' 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.967 [2024-11-05 11:25:23.231477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.967 [2024-11-05 11:25:23.231502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.967 [2024-11-05 11:25:23.231577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.967 [2024-11-05 11:25:23.231626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.967 [2024-11-05 11:25:23.231638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.967 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.227 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.228 [2024-11-05 11:25:23.371287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:24.228 [2024-11-05 11:25:23.373211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:24.228 [2024-11-05 11:25:23.373319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:24.228 [2024-11-05 11:25:23.373413] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:24.228 [2024-11-05 11:25:23.373464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.228 [2024-11-05 11:25:23.373493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:24.228 request: 00:09:24.228 { 00:09:24.228 "name": "raid_bdev1", 00:09:24.228 "raid_level": "concat", 00:09:24.228 "base_bdevs": [ 00:09:24.228 "malloc1", 00:09:24.228 "malloc2" 00:09:24.228 ], 00:09:24.228 "strip_size_kb": 64, 00:09:24.228 "superblock": false, 00:09:24.228 "method": "bdev_raid_create", 00:09:24.228 "req_id": 1 00:09:24.228 } 00:09:24.228 Got JSON-RPC error response 00:09:24.228 response: 00:09:24.228 { 00:09:24.228 "code": -17, 00:09:24.228 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:24.228 } 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:24.228 
11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.228 [2024-11-05 11:25:23.439205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.228 [2024-11-05 11:25:23.439260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.228 [2024-11-05 11:25:23.439278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:24.228 [2024-11-05 11:25:23.439290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.228 [2024-11-05 11:25:23.441408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.228 [2024-11-05 11:25:23.441448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.228 [2024-11-05 11:25:23.441521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:24.228 [2024-11-05 11:25:23.441574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.228 pt1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.228 "name": "raid_bdev1", 00:09:24.228 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:24.228 "strip_size_kb": 64, 00:09:24.228 "state": "configuring", 00:09:24.228 "raid_level": "concat", 00:09:24.228 "superblock": true, 00:09:24.228 "num_base_bdevs": 2, 00:09:24.228 "num_base_bdevs_discovered": 1, 00:09:24.228 "num_base_bdevs_operational": 2, 00:09:24.228 "base_bdevs_list": [ 00:09:24.228 { 00:09:24.228 "name": "pt1", 00:09:24.228 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:24.228 "is_configured": true, 00:09:24.228 "data_offset": 2048, 00:09:24.228 "data_size": 63488 00:09:24.228 }, 00:09:24.228 { 00:09:24.228 "name": null, 00:09:24.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.228 "is_configured": false, 00:09:24.228 "data_offset": 2048, 00:09:24.228 "data_size": 63488 00:09:24.228 } 00:09:24.228 ] 00:09:24.228 }' 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.228 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.798 [2024-11-05 11:25:23.882513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.798 [2024-11-05 11:25:23.882716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.798 [2024-11-05 11:25:23.882755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:24.798 [2024-11-05 11:25:23.882790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.798 [2024-11-05 11:25:23.883312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.798 [2024-11-05 11:25:23.883380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:24.798 [2024-11-05 11:25:23.883500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.798 [2024-11-05 11:25:23.883556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.798 [2024-11-05 11:25:23.883705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.798 [2024-11-05 11:25:23.883746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.798 [2024-11-05 11:25:23.884007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.798 [2024-11-05 11:25:23.884203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.798 [2024-11-05 11:25:23.884247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.798 [2024-11-05 11:25:23.884416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.798 pt2 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.798 "name": "raid_bdev1", 00:09:24.798 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:24.798 "strip_size_kb": 64, 00:09:24.798 "state": "online", 00:09:24.798 "raid_level": "concat", 00:09:24.798 "superblock": true, 00:09:24.798 "num_base_bdevs": 2, 00:09:24.798 "num_base_bdevs_discovered": 2, 00:09:24.798 "num_base_bdevs_operational": 2, 00:09:24.798 "base_bdevs_list": [ 00:09:24.798 { 00:09:24.798 "name": "pt1", 00:09:24.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.798 "is_configured": true, 00:09:24.798 "data_offset": 2048, 00:09:24.798 "data_size": 63488 00:09:24.798 }, 00:09:24.798 { 00:09:24.798 "name": "pt2", 00:09:24.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.798 "is_configured": true, 00:09:24.798 "data_offset": 2048, 00:09:24.798 "data_size": 63488 00:09:24.798 } 00:09:24.798 ] 00:09:24.798 }' 00:09:24.798 11:25:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.798 11:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.058 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.058 [2024-11-05 11:25:24.325969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.318 "name": "raid_bdev1", 00:09:25.318 "aliases": [ 00:09:25.318 "aee17212-7bc1-40ef-8543-1681f02f47dc" 00:09:25.318 ], 00:09:25.318 "product_name": "Raid Volume", 00:09:25.318 "block_size": 512, 00:09:25.318 "num_blocks": 126976, 00:09:25.318 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:25.318 "assigned_rate_limits": { 00:09:25.318 "rw_ios_per_sec": 0, 00:09:25.318 "rw_mbytes_per_sec": 0, 00:09:25.318 
"r_mbytes_per_sec": 0, 00:09:25.318 "w_mbytes_per_sec": 0 00:09:25.318 }, 00:09:25.318 "claimed": false, 00:09:25.318 "zoned": false, 00:09:25.318 "supported_io_types": { 00:09:25.318 "read": true, 00:09:25.318 "write": true, 00:09:25.318 "unmap": true, 00:09:25.318 "flush": true, 00:09:25.318 "reset": true, 00:09:25.318 "nvme_admin": false, 00:09:25.318 "nvme_io": false, 00:09:25.318 "nvme_io_md": false, 00:09:25.318 "write_zeroes": true, 00:09:25.318 "zcopy": false, 00:09:25.318 "get_zone_info": false, 00:09:25.318 "zone_management": false, 00:09:25.318 "zone_append": false, 00:09:25.318 "compare": false, 00:09:25.318 "compare_and_write": false, 00:09:25.318 "abort": false, 00:09:25.318 "seek_hole": false, 00:09:25.318 "seek_data": false, 00:09:25.318 "copy": false, 00:09:25.318 "nvme_iov_md": false 00:09:25.318 }, 00:09:25.318 "memory_domains": [ 00:09:25.318 { 00:09:25.318 "dma_device_id": "system", 00:09:25.318 "dma_device_type": 1 00:09:25.318 }, 00:09:25.318 { 00:09:25.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.318 "dma_device_type": 2 00:09:25.318 }, 00:09:25.318 { 00:09:25.318 "dma_device_id": "system", 00:09:25.318 "dma_device_type": 1 00:09:25.318 }, 00:09:25.318 { 00:09:25.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.318 "dma_device_type": 2 00:09:25.318 } 00:09:25.318 ], 00:09:25.318 "driver_specific": { 00:09:25.318 "raid": { 00:09:25.318 "uuid": "aee17212-7bc1-40ef-8543-1681f02f47dc", 00:09:25.318 "strip_size_kb": 64, 00:09:25.318 "state": "online", 00:09:25.318 "raid_level": "concat", 00:09:25.318 "superblock": true, 00:09:25.318 "num_base_bdevs": 2, 00:09:25.318 "num_base_bdevs_discovered": 2, 00:09:25.318 "num_base_bdevs_operational": 2, 00:09:25.318 "base_bdevs_list": [ 00:09:25.318 { 00:09:25.318 "name": "pt1", 00:09:25.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.318 "is_configured": true, 00:09:25.318 "data_offset": 2048, 00:09:25.318 "data_size": 63488 00:09:25.318 }, 00:09:25.318 { 00:09:25.318 "name": 
"pt2", 00:09:25.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.318 "is_configured": true, 00:09:25.318 "data_offset": 2048, 00:09:25.318 "data_size": 63488 00:09:25.318 } 00:09:25.318 ] 00:09:25.318 } 00:09:25.318 } 00:09:25.318 }' 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.318 pt2' 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.318 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.319 [2024-11-05 11:25:24.553536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aee17212-7bc1-40ef-8543-1681f02f47dc '!=' aee17212-7bc1-40ef-8543-1681f02f47dc ']' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62340 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62340 ']' 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 62340 00:09:25.319 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62340 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62340' 00:09:25.579 killing process with pid 62340 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62340 00:09:25.579 [2024-11-05 11:25:24.619844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.579 [2024-11-05 11:25:24.619991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.579 11:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62340 00:09:25.579 [2024-11-05 11:25:24.620070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.579 [2024-11-05 11:25:24.620085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.579 [2024-11-05 11:25:24.823038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.959 11:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.959 00:09:26.959 real 0m4.572s 00:09:26.959 user 0m6.465s 00:09:26.959 sys 0m0.712s 00:09:26.959 11:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.959 11:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:26.959 ************************************ 00:09:26.959 END TEST raid_superblock_test 00:09:26.959 ************************************ 00:09:26.959 11:25:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:26.959 11:25:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:26.959 11:25:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.959 11:25:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.959 ************************************ 00:09:26.959 START TEST raid_read_error_test 00:09:26.959 ************************************ 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.959 11:25:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2t5OCayXgh 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62546 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62546 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62546 ']' 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.959 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.959 [2024-11-05 11:25:26.098753] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:26.959 [2024-11-05 11:25:26.098944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62546 ] 00:09:27.219 [2024-11-05 11:25:26.273759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.219 [2024-11-05 11:25:26.387510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.478 [2024-11-05 11:25:26.593687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.478 [2024-11-05 11:25:26.593826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.736 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.736 11:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:27.737 11:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.737 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.737 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.737 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 BaseBdev1_malloc 
00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 true 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 [2024-11-05 11:25:27.060160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.996 [2024-11-05 11:25:27.060265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.996 [2024-11-05 11:25:27.060301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.996 [2024-11-05 11:25:27.060312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.996 [2024-11-05 11:25:27.062414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.996 [2024-11-05 11:25:27.062456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.996 BaseBdev1 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 BaseBdev2_malloc 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 true 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 [2024-11-05 11:25:27.124707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.996 [2024-11-05 11:25:27.124758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.996 [2024-11-05 11:25:27.124774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.996 [2024-11-05 11:25:27.124785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.996 [2024-11-05 11:25:27.126867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.996 [2024-11-05 11:25:27.126952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.996 BaseBdev2 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 [2024-11-05 11:25:27.136755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.996 [2024-11-05 11:25:27.138612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.996 [2024-11-05 11:25:27.138793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.996 [2024-11-05 11:25:27.138808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:27.996 [2024-11-05 11:25:27.139035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:27.996 [2024-11-05 11:25:27.139223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.996 [2024-11-05 11:25:27.139237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.996 [2024-11-05 11:25:27.139402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.996 "name": "raid_bdev1", 00:09:27.996 "uuid": "3f8c58dd-9a35-46b6-bc7e-8c21cec511cf", 00:09:27.996 "strip_size_kb": 64, 00:09:27.996 "state": "online", 00:09:27.996 "raid_level": "concat", 00:09:27.996 "superblock": true, 00:09:27.996 "num_base_bdevs": 2, 00:09:27.996 "num_base_bdevs_discovered": 2, 00:09:27.996 "num_base_bdevs_operational": 2, 00:09:27.996 "base_bdevs_list": [ 00:09:27.996 { 00:09:27.996 "name": "BaseBdev1", 00:09:27.996 "uuid": "919140ce-c71b-58f0-bbfe-d8b487eb0b34", 00:09:27.996 "is_configured": true, 00:09:27.996 "data_offset": 2048, 00:09:27.996 "data_size": 63488 00:09:27.996 }, 00:09:27.996 { 00:09:27.996 "name": "BaseBdev2", 00:09:27.996 
"uuid": "75c44f2e-1f83-557f-8c6d-375df99a2b98", 00:09:27.996 "is_configured": true, 00:09:27.996 "data_offset": 2048, 00:09:27.996 "data_size": 63488 00:09:27.996 } 00:09:27.996 ] 00:09:27.996 }' 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.996 11:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.565 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.565 11:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.565 [2024-11-05 11:25:27.705114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:29.502 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.503 "name": "raid_bdev1", 00:09:29.503 "uuid": "3f8c58dd-9a35-46b6-bc7e-8c21cec511cf", 00:09:29.503 "strip_size_kb": 64, 00:09:29.503 "state": "online", 00:09:29.503 "raid_level": "concat", 00:09:29.503 "superblock": true, 00:09:29.503 "num_base_bdevs": 2, 00:09:29.503 "num_base_bdevs_discovered": 2, 00:09:29.503 "num_base_bdevs_operational": 2, 00:09:29.503 "base_bdevs_list": [ 00:09:29.503 { 00:09:29.503 "name": "BaseBdev1", 00:09:29.503 "uuid": "919140ce-c71b-58f0-bbfe-d8b487eb0b34", 00:09:29.503 "is_configured": true, 00:09:29.503 "data_offset": 2048, 00:09:29.503 "data_size": 63488 00:09:29.503 }, 00:09:29.503 { 00:09:29.503 "name": "BaseBdev2", 00:09:29.503 "uuid": 
"75c44f2e-1f83-557f-8c6d-375df99a2b98", 00:09:29.503 "is_configured": true, 00:09:29.503 "data_offset": 2048, 00:09:29.503 "data_size": 63488 00:09:29.503 } 00:09:29.503 ] 00:09:29.503 }' 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.503 11:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.071 [2024-11-05 11:25:29.096924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.071 [2024-11-05 11:25:29.096962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.071 [2024-11-05 11:25:29.099508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.071 [2024-11-05 11:25:29.099550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.071 [2024-11-05 11:25:29.099581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.071 [2024-11-05 11:25:29.099595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:30.071 { 00:09:30.071 "results": [ 00:09:30.071 { 00:09:30.071 "job": "raid_bdev1", 00:09:30.071 "core_mask": "0x1", 00:09:30.071 "workload": "randrw", 00:09:30.071 "percentage": 50, 00:09:30.071 "status": "finished", 00:09:30.071 "queue_depth": 1, 00:09:30.071 "io_size": 131072, 00:09:30.071 "runtime": 1.392593, 00:09:30.071 "iops": 15710.979446256013, 00:09:30.071 "mibps": 1963.8724307820016, 00:09:30.071 "io_failed": 1, 00:09:30.071 "io_timeout": 0, 00:09:30.071 "avg_latency_us": 
88.39607162530037, 00:09:30.071 "min_latency_us": 26.047161572052403, 00:09:30.071 "max_latency_us": 1473.844541484716 00:09:30.071 } 00:09:30.071 ], 00:09:30.071 "core_count": 1 00:09:30.071 } 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62546 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62546 ']' 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62546 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62546 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62546' 00:09:30.071 killing process with pid 62546 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62546 00:09:30.071 [2024-11-05 11:25:29.151844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.071 11:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62546 00:09:30.071 [2024-11-05 11:25:29.292007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2t5OCayXgh 00:09:31.460 
11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:31.460 ************************************ 00:09:31.460 END TEST raid_read_error_test 00:09:31.460 ************************************ 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:31.460 00:09:31.460 real 0m4.473s 00:09:31.460 user 0m5.452s 00:09:31.460 sys 0m0.571s 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.460 11:25:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 11:25:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:31.460 11:25:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:31.460 11:25:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.460 11:25:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 ************************************ 00:09:31.460 START TEST raid_write_error_test 00:09:31.460 ************************************ 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:31.460 11:25:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GWvLx38VEM 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62697 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62697 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62697 ']' 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.460 11:25:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 [2024-11-05 11:25:30.641135] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:09:31.460 [2024-11-05 11:25:30.641357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62697 ] 00:09:31.734 [2024-11-05 11:25:30.818151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.734 [2024-11-05 11:25:30.930375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.993 [2024-11-05 11:25:31.132437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.993 [2024-11-05 11:25:31.132497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.251 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.514 BaseBdev1_malloc 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.514 true 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.514 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.514 [2024-11-05 11:25:31.548062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:32.514 [2024-11-05 11:25:31.548120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.514 [2024-11-05 11:25:31.548153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:32.515 [2024-11-05 11:25:31.548165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.515 [2024-11-05 11:25:31.550176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.515 [2024-11-05 11:25:31.550255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:32.515 BaseBdev1 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.515 BaseBdev2_malloc 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:32.515 11:25:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.515 true 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.515 [2024-11-05 11:25:31.616979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:32.515 [2024-11-05 11:25:31.617033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.515 [2024-11-05 11:25:31.617050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:32.515 [2024-11-05 11:25:31.617061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.515 [2024-11-05 11:25:31.619113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.515 [2024-11-05 11:25:31.619163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:32.515 BaseBdev2 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.515 [2024-11-05 11:25:31.629019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:32.515 [2024-11-05 11:25:31.630807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.515 [2024-11-05 11:25:31.631058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.515 [2024-11-05 11:25:31.631079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:32.515 [2024-11-05 11:25:31.631315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:32.515 [2024-11-05 11:25:31.631485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.515 [2024-11-05 11:25:31.631498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:32.515 [2024-11-05 11:25:31.631645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.515 11:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.515 "name": "raid_bdev1", 00:09:32.515 "uuid": "9c94449b-e17f-4ce4-aee4-489fae02b341", 00:09:32.515 "strip_size_kb": 64, 00:09:32.515 "state": "online", 00:09:32.515 "raid_level": "concat", 00:09:32.515 "superblock": true, 00:09:32.515 "num_base_bdevs": 2, 00:09:32.515 "num_base_bdevs_discovered": 2, 00:09:32.515 "num_base_bdevs_operational": 2, 00:09:32.515 "base_bdevs_list": [ 00:09:32.515 { 00:09:32.515 "name": "BaseBdev1", 00:09:32.515 "uuid": "84f59e33-051a-5625-be5b-b56752035686", 00:09:32.515 "is_configured": true, 00:09:32.515 "data_offset": 2048, 00:09:32.515 "data_size": 63488 00:09:32.515 }, 00:09:32.515 { 00:09:32.515 "name": "BaseBdev2", 00:09:32.515 "uuid": "42b38c57-3cdf-5ba8-97e1-b51e0cd2c666", 00:09:32.515 "is_configured": true, 00:09:32.515 "data_offset": 2048, 00:09:32.515 "data_size": 63488 00:09:32.515 } 00:09:32.515 ] 00:09:32.515 }' 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.515 11:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.085 11:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:33.085 11:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.085 [2024-11-05 11:25:32.173542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.023 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.023 "name": "raid_bdev1", 00:09:34.023 "uuid": "9c94449b-e17f-4ce4-aee4-489fae02b341", 00:09:34.023 "strip_size_kb": 64, 00:09:34.023 "state": "online", 00:09:34.023 "raid_level": "concat", 00:09:34.023 "superblock": true, 00:09:34.023 "num_base_bdevs": 2, 00:09:34.023 "num_base_bdevs_discovered": 2, 00:09:34.024 "num_base_bdevs_operational": 2, 00:09:34.024 "base_bdevs_list": [ 00:09:34.024 { 00:09:34.024 "name": "BaseBdev1", 00:09:34.024 "uuid": "84f59e33-051a-5625-be5b-b56752035686", 00:09:34.024 "is_configured": true, 00:09:34.024 "data_offset": 2048, 00:09:34.024 "data_size": 63488 00:09:34.024 }, 00:09:34.024 { 00:09:34.024 "name": "BaseBdev2", 00:09:34.024 "uuid": "42b38c57-3cdf-5ba8-97e1-b51e0cd2c666", 00:09:34.024 "is_configured": true, 00:09:34.024 "data_offset": 2048, 00:09:34.024 "data_size": 63488 00:09:34.024 } 00:09:34.024 ] 00:09:34.024 }' 00:09:34.024 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.024 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.593 [2024-11-05 11:25:33.599781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.593 [2024-11-05 11:25:33.599820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.593 [2024-11-05 11:25:33.602448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.593 [2024-11-05 11:25:33.602490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.593 [2024-11-05 11:25:33.602521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.593 [2024-11-05 11:25:33.602534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:34.593 { 00:09:34.593 "results": [ 00:09:34.593 { 00:09:34.593 "job": "raid_bdev1", 00:09:34.593 "core_mask": "0x1", 00:09:34.593 "workload": "randrw", 00:09:34.593 "percentage": 50, 00:09:34.593 "status": "finished", 00:09:34.593 "queue_depth": 1, 00:09:34.593 "io_size": 131072, 00:09:34.593 "runtime": 1.427141, 00:09:34.593 "iops": 16385.90720888826, 00:09:34.593 "mibps": 2048.2384011110325, 00:09:34.593 "io_failed": 1, 00:09:34.593 "io_timeout": 0, 00:09:34.593 "avg_latency_us": 84.5976738966358, 00:09:34.593 "min_latency_us": 25.6, 00:09:34.593 "max_latency_us": 1359.3711790393013 00:09:34.593 } 00:09:34.593 ], 00:09:34.593 "core_count": 1 00:09:34.593 } 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62697 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 62697 ']' 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62697 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62697 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62697' 00:09:34.593 killing process with pid 62697 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62697 00:09:34.593 [2024-11-05 11:25:33.652207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.593 11:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62697 00:09:34.593 [2024-11-05 11:25:33.788089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GWvLx38VEM 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:35.973 ************************************ 00:09:35.973 END TEST raid_write_error_test 00:09:35.973 ************************************ 00:09:35.973 00:09:35.973 real 0m4.452s 00:09:35.973 user 0m5.325s 00:09:35.973 sys 0m0.617s 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.973 11:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.973 11:25:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:35.973 11:25:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:35.973 11:25:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:35.973 11:25:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.973 11:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.973 ************************************ 00:09:35.973 START TEST raid_state_function_test 00:09:35.973 ************************************ 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:35.973 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62835 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:09:35.974 Process raid pid: 62835 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62835' 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62835 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62835 ']' 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.974 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.974 [2024-11-05 11:25:35.168544] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:09:35.974 [2024-11-05 11:25:35.168693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.232 [2024-11-05 11:25:35.351506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.232 [2024-11-05 11:25:35.470857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.489 [2024-11-05 11:25:35.674916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.489 [2024-11-05 11:25:35.674962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.749 [2024-11-05 11:25:36.012937] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.749 [2024-11-05 11:25:36.013076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.749 [2024-11-05 11:25:36.013092] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.749 [2024-11-05 11:25:36.013103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.749 11:25:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.749 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.017 "name": "Existed_Raid", 00:09:37.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.017 "strip_size_kb": 0, 00:09:37.017 "state": "configuring", 00:09:37.017 
"raid_level": "raid1", 00:09:37.017 "superblock": false, 00:09:37.017 "num_base_bdevs": 2, 00:09:37.017 "num_base_bdevs_discovered": 0, 00:09:37.017 "num_base_bdevs_operational": 2, 00:09:37.017 "base_bdevs_list": [ 00:09:37.017 { 00:09:37.017 "name": "BaseBdev1", 00:09:37.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.017 "is_configured": false, 00:09:37.017 "data_offset": 0, 00:09:37.017 "data_size": 0 00:09:37.017 }, 00:09:37.017 { 00:09:37.017 "name": "BaseBdev2", 00:09:37.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.017 "is_configured": false, 00:09:37.017 "data_offset": 0, 00:09:37.017 "data_size": 0 00:09:37.017 } 00:09:37.017 ] 00:09:37.017 }' 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.017 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.287 [2024-11-05 11:25:36.516053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.287 [2024-11-05 11:25:36.516207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:37.287 [2024-11-05 11:25:36.528013] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.287 [2024-11-05 11:25:36.528116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.287 [2024-11-05 11:25:36.528163] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.287 [2024-11-05 11:25:36.528192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.287 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.547 [2024-11-05 11:25:36.579692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.547 BaseBdev1 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.547 [ 00:09:37.547 { 00:09:37.547 "name": "BaseBdev1", 00:09:37.547 "aliases": [ 00:09:37.547 "ff615923-852f-4699-a8e6-44452371fe07" 00:09:37.547 ], 00:09:37.547 "product_name": "Malloc disk", 00:09:37.547 "block_size": 512, 00:09:37.547 "num_blocks": 65536, 00:09:37.547 "uuid": "ff615923-852f-4699-a8e6-44452371fe07", 00:09:37.547 "assigned_rate_limits": { 00:09:37.547 "rw_ios_per_sec": 0, 00:09:37.547 "rw_mbytes_per_sec": 0, 00:09:37.547 "r_mbytes_per_sec": 0, 00:09:37.547 "w_mbytes_per_sec": 0 00:09:37.547 }, 00:09:37.547 "claimed": true, 00:09:37.547 "claim_type": "exclusive_write", 00:09:37.547 "zoned": false, 00:09:37.547 "supported_io_types": { 00:09:37.547 "read": true, 00:09:37.547 "write": true, 00:09:37.547 "unmap": true, 00:09:37.547 "flush": true, 00:09:37.547 "reset": true, 00:09:37.547 "nvme_admin": false, 00:09:37.547 "nvme_io": false, 00:09:37.547 "nvme_io_md": false, 00:09:37.547 "write_zeroes": true, 00:09:37.547 "zcopy": true, 00:09:37.547 "get_zone_info": false, 00:09:37.547 "zone_management": false, 00:09:37.547 "zone_append": false, 00:09:37.547 "compare": false, 00:09:37.547 "compare_and_write": false, 00:09:37.547 "abort": true, 00:09:37.547 "seek_hole": false, 00:09:37.547 "seek_data": false, 00:09:37.547 "copy": true, 00:09:37.547 "nvme_iov_md": 
false 00:09:37.547 }, 00:09:37.547 "memory_domains": [ 00:09:37.547 { 00:09:37.547 "dma_device_id": "system", 00:09:37.547 "dma_device_type": 1 00:09:37.547 }, 00:09:37.547 { 00:09:37.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.547 "dma_device_type": 2 00:09:37.547 } 00:09:37.547 ], 00:09:37.547 "driver_specific": {} 00:09:37.547 } 00:09:37.547 ] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.547 
11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.547 "name": "Existed_Raid", 00:09:37.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.547 "strip_size_kb": 0, 00:09:37.547 "state": "configuring", 00:09:37.547 "raid_level": "raid1", 00:09:37.547 "superblock": false, 00:09:37.547 "num_base_bdevs": 2, 00:09:37.547 "num_base_bdevs_discovered": 1, 00:09:37.547 "num_base_bdevs_operational": 2, 00:09:37.547 "base_bdevs_list": [ 00:09:37.547 { 00:09:37.547 "name": "BaseBdev1", 00:09:37.547 "uuid": "ff615923-852f-4699-a8e6-44452371fe07", 00:09:37.547 "is_configured": true, 00:09:37.547 "data_offset": 0, 00:09:37.547 "data_size": 65536 00:09:37.547 }, 00:09:37.547 { 00:09:37.547 "name": "BaseBdev2", 00:09:37.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.547 "is_configured": false, 00:09:37.547 "data_offset": 0, 00:09:37.547 "data_size": 0 00:09:37.547 } 00:09:37.547 ] 00:09:37.547 }' 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.547 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.116 [2024-11-05 11:25:37.098971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.116 [2024-11-05 11:25:37.099125] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.116 [2024-11-05 11:25:37.110993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.116 [2024-11-05 11:25:37.112971] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.116 [2024-11-05 11:25:37.113060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.116 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.116 "name": "Existed_Raid", 00:09:38.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.116 "strip_size_kb": 0, 00:09:38.116 "state": "configuring", 00:09:38.116 "raid_level": "raid1", 00:09:38.116 "superblock": false, 00:09:38.116 "num_base_bdevs": 2, 00:09:38.116 "num_base_bdevs_discovered": 1, 00:09:38.116 "num_base_bdevs_operational": 2, 00:09:38.116 "base_bdevs_list": [ 00:09:38.116 { 00:09:38.116 "name": "BaseBdev1", 00:09:38.116 "uuid": "ff615923-852f-4699-a8e6-44452371fe07", 00:09:38.116 "is_configured": true, 00:09:38.117 "data_offset": 0, 00:09:38.117 "data_size": 65536 00:09:38.117 }, 00:09:38.117 { 00:09:38.117 "name": "BaseBdev2", 00:09:38.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.117 "is_configured": false, 00:09:38.117 "data_offset": 0, 00:09:38.117 "data_size": 0 00:09:38.117 } 00:09:38.117 ] 
00:09:38.117 }' 00:09:38.117 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.117 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.376 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.376 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.376 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.376 [2024-11-05 11:25:37.605069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.376 [2024-11-05 11:25:37.605217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.376 [2024-11-05 11:25:37.605232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:38.377 [2024-11-05 11:25:37.605496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.377 [2024-11-05 11:25:37.605664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.377 [2024-11-05 11:25:37.605678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.377 [2024-11-05 11:25:37.605962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.377 BaseBdev2 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 [ 00:09:38.377 { 00:09:38.377 "name": "BaseBdev2", 00:09:38.377 "aliases": [ 00:09:38.377 "dfe64b92-9d94-4a86-98cf-0b48a2e392a3" 00:09:38.377 ], 00:09:38.377 "product_name": "Malloc disk", 00:09:38.377 "block_size": 512, 00:09:38.377 "num_blocks": 65536, 00:09:38.377 "uuid": "dfe64b92-9d94-4a86-98cf-0b48a2e392a3", 00:09:38.377 "assigned_rate_limits": { 00:09:38.377 "rw_ios_per_sec": 0, 00:09:38.377 "rw_mbytes_per_sec": 0, 00:09:38.377 "r_mbytes_per_sec": 0, 00:09:38.377 "w_mbytes_per_sec": 0 00:09:38.377 }, 00:09:38.377 "claimed": true, 00:09:38.377 "claim_type": "exclusive_write", 00:09:38.377 "zoned": false, 00:09:38.377 "supported_io_types": { 00:09:38.377 "read": true, 00:09:38.377 "write": true, 00:09:38.377 "unmap": true, 00:09:38.377 "flush": true, 00:09:38.377 "reset": true, 00:09:38.377 "nvme_admin": false, 00:09:38.377 "nvme_io": false, 00:09:38.377 "nvme_io_md": false, 00:09:38.377 "write_zeroes": 
true, 00:09:38.377 "zcopy": true, 00:09:38.377 "get_zone_info": false, 00:09:38.377 "zone_management": false, 00:09:38.377 "zone_append": false, 00:09:38.377 "compare": false, 00:09:38.377 "compare_and_write": false, 00:09:38.377 "abort": true, 00:09:38.377 "seek_hole": false, 00:09:38.377 "seek_data": false, 00:09:38.377 "copy": true, 00:09:38.377 "nvme_iov_md": false 00:09:38.377 }, 00:09:38.377 "memory_domains": [ 00:09:38.377 { 00:09:38.377 "dma_device_id": "system", 00:09:38.377 "dma_device_type": 1 00:09:38.377 }, 00:09:38.377 { 00:09:38.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.377 "dma_device_type": 2 00:09:38.377 } 00:09:38.377 ], 00:09:38.377 "driver_specific": {} 00:09:38.377 } 00:09:38.377 ] 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.377 11:25:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.377 11:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.636 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.636 "name": "Existed_Raid", 00:09:38.636 "uuid": "e85c6e54-0c75-4e95-8a73-fc6b211f974f", 00:09:38.636 "strip_size_kb": 0, 00:09:38.636 "state": "online", 00:09:38.636 "raid_level": "raid1", 00:09:38.636 "superblock": false, 00:09:38.636 "num_base_bdevs": 2, 00:09:38.636 "num_base_bdevs_discovered": 2, 00:09:38.636 "num_base_bdevs_operational": 2, 00:09:38.636 "base_bdevs_list": [ 00:09:38.636 { 00:09:38.636 "name": "BaseBdev1", 00:09:38.636 "uuid": "ff615923-852f-4699-a8e6-44452371fe07", 00:09:38.636 "is_configured": true, 00:09:38.636 "data_offset": 0, 00:09:38.636 "data_size": 65536 00:09:38.636 }, 00:09:38.636 { 00:09:38.636 "name": "BaseBdev2", 00:09:38.636 "uuid": "dfe64b92-9d94-4a86-98cf-0b48a2e392a3", 00:09:38.636 "is_configured": true, 00:09:38.636 "data_offset": 0, 00:09:38.636 "data_size": 65536 00:09:38.636 } 00:09:38.636 ] 00:09:38.636 }' 00:09:38.636 11:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.636 11:25:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.896 [2024-11-05 11:25:38.068622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.896 "name": "Existed_Raid", 00:09:38.896 "aliases": [ 00:09:38.896 "e85c6e54-0c75-4e95-8a73-fc6b211f974f" 00:09:38.896 ], 00:09:38.896 "product_name": "Raid Volume", 00:09:38.896 "block_size": 512, 00:09:38.896 "num_blocks": 65536, 00:09:38.896 "uuid": "e85c6e54-0c75-4e95-8a73-fc6b211f974f", 00:09:38.896 "assigned_rate_limits": { 00:09:38.896 "rw_ios_per_sec": 0, 00:09:38.896 "rw_mbytes_per_sec": 0, 00:09:38.896 "r_mbytes_per_sec": 0, 00:09:38.896 
"w_mbytes_per_sec": 0 00:09:38.896 }, 00:09:38.896 "claimed": false, 00:09:38.896 "zoned": false, 00:09:38.896 "supported_io_types": { 00:09:38.896 "read": true, 00:09:38.896 "write": true, 00:09:38.896 "unmap": false, 00:09:38.896 "flush": false, 00:09:38.896 "reset": true, 00:09:38.896 "nvme_admin": false, 00:09:38.896 "nvme_io": false, 00:09:38.896 "nvme_io_md": false, 00:09:38.896 "write_zeroes": true, 00:09:38.896 "zcopy": false, 00:09:38.896 "get_zone_info": false, 00:09:38.896 "zone_management": false, 00:09:38.896 "zone_append": false, 00:09:38.896 "compare": false, 00:09:38.896 "compare_and_write": false, 00:09:38.896 "abort": false, 00:09:38.896 "seek_hole": false, 00:09:38.896 "seek_data": false, 00:09:38.896 "copy": false, 00:09:38.896 "nvme_iov_md": false 00:09:38.896 }, 00:09:38.896 "memory_domains": [ 00:09:38.896 { 00:09:38.896 "dma_device_id": "system", 00:09:38.896 "dma_device_type": 1 00:09:38.896 }, 00:09:38.896 { 00:09:38.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.896 "dma_device_type": 2 00:09:38.896 }, 00:09:38.896 { 00:09:38.896 "dma_device_id": "system", 00:09:38.896 "dma_device_type": 1 00:09:38.896 }, 00:09:38.896 { 00:09:38.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.896 "dma_device_type": 2 00:09:38.896 } 00:09:38.896 ], 00:09:38.896 "driver_specific": { 00:09:38.896 "raid": { 00:09:38.896 "uuid": "e85c6e54-0c75-4e95-8a73-fc6b211f974f", 00:09:38.896 "strip_size_kb": 0, 00:09:38.896 "state": "online", 00:09:38.896 "raid_level": "raid1", 00:09:38.896 "superblock": false, 00:09:38.896 "num_base_bdevs": 2, 00:09:38.896 "num_base_bdevs_discovered": 2, 00:09:38.896 "num_base_bdevs_operational": 2, 00:09:38.896 "base_bdevs_list": [ 00:09:38.896 { 00:09:38.896 "name": "BaseBdev1", 00:09:38.896 "uuid": "ff615923-852f-4699-a8e6-44452371fe07", 00:09:38.896 "is_configured": true, 00:09:38.896 "data_offset": 0, 00:09:38.896 "data_size": 65536 00:09:38.896 }, 00:09:38.896 { 00:09:38.896 "name": "BaseBdev2", 00:09:38.896 "uuid": 
"dfe64b92-9d94-4a86-98cf-0b48a2e392a3", 00:09:38.896 "is_configured": true, 00:09:38.896 "data_offset": 0, 00:09:38.896 "data_size": 65536 00:09:38.896 } 00:09:38.896 ] 00:09:38.896 } 00:09:38.896 } 00:09:38.896 }' 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.896 BaseBdev2' 00:09:38.896 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.156 [2024-11-05 11:25:38.288029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
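The `cmp_raid_bdev`/`cmp_base_bdev` checks above (bdev_raid.sh@189 and @192) build a metadata fingerprint with `jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`; jq's join() renders null or missing fields as empty strings, which is why both sides reduce to `512` followed by trailing separators and the `[[ 512 == \5\1\2\ \ \ ]]` comparison passes. A small Python sketch of the same fingerprinting (the helper name below is illustrative, not from the suite):

```python
# Replays the jq fingerprint used by bdev_raid.sh@189/@192 above:
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# jq's join() turns null (or missing) fields into empty strings, so a
# 512-byte-block bdev with no metadata yields "512" plus three separators.
def md_fingerprint(bdev):  # illustrative helper, not part of bdev_raid.sh
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

raid_bdev = {"block_size": 512}  # md fields absent, as in this trace
base_bdev = {"block_size": 512}
print(repr(md_fingerprint(raid_bdev)))  # '512   '
```

Comparing fingerprints rather than individual fields lets the test verify in one shot that the raid volume inherits block size, metadata size, interleave mode, and DIF type from its base bdevs.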
raid_bdev_name=Existed_Raid 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.156 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.157 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.416 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.416 "name": "Existed_Raid", 00:09:39.416 "uuid": "e85c6e54-0c75-4e95-8a73-fc6b211f974f", 00:09:39.416 "strip_size_kb": 0, 00:09:39.416 "state": "online", 00:09:39.416 "raid_level": "raid1", 00:09:39.416 "superblock": false, 00:09:39.416 "num_base_bdevs": 2, 00:09:39.416 "num_base_bdevs_discovered": 1, 00:09:39.416 "num_base_bdevs_operational": 1, 00:09:39.416 "base_bdevs_list": [ 00:09:39.416 { 
00:09:39.416 "name": null, 00:09:39.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.416 "is_configured": false, 00:09:39.416 "data_offset": 0, 00:09:39.416 "data_size": 65536 00:09:39.416 }, 00:09:39.416 { 00:09:39.416 "name": "BaseBdev2", 00:09:39.416 "uuid": "dfe64b92-9d94-4a86-98cf-0b48a2e392a3", 00:09:39.416 "is_configured": true, 00:09:39.416 "data_offset": 0, 00:09:39.416 "data_size": 65536 00:09:39.416 } 00:09:39.416 ] 00:09:39.416 }' 00:09:39.416 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.416 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.676 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.676 [2024-11-05 11:25:38.881933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.676 [2024-11-05 11:25:38.882034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.937 [2024-11-05 11:25:38.980873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.937 [2024-11-05 11:25:38.981009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.937 [2024-11-05 11:25:38.981053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.937 11:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62835 00:09:39.937 11:25:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62835 ']' 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62835 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62835 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62835' 00:09:39.937 killing process with pid 62835 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62835 00:09:39.937 11:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62835 00:09:39.937 [2024-11-05 11:25:39.075784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.937 [2024-11-05 11:25:39.092890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.316 ************************************ 00:09:41.316 END TEST raid_state_function_test 00:09:41.316 ************************************ 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.316 00:09:41.316 real 0m5.169s 00:09:41.316 user 0m7.460s 00:09:41.316 sys 0m0.870s 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 11:25:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
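The killprocess sequence above (autotest_common.sh@952-976) checks the pid with `kill -0`, matches the process name via `ps --no-headers -o comm=`, sends SIGTERM, and `wait`s on the pid. A rough standalone sketch of that pattern, with `sleep 30` standing in for the bdev_svc app whose pid (62835) the test tracks (simplified: the name matching and sudo handling of the real helper are omitted):

```python
import signal
import subprocess

# Stand-in for the bdev_svc app the test launches and later reaps.
proc = subprocess.Popen(["sleep", "30"])

assert proc.poll() is None          # kill -0 $pid: process is alive
proc.send_signal(signal.SIGTERM)    # kill $pid
proc.wait()                         # wait $pid: reap the child
print("reaped", proc.pid, "status", proc.returncode)
```

Reaping via wait matters in the suite: it guarantees the SPDK socket and hugepage resources are released before the next test stage starts.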
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:41.316 11:25:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:41.316 11:25:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.316 11:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 ************************************ 00:09:41.316 START TEST raid_state_function_test_sb 00:09:41.316 ************************************ 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:41.316 Process raid pid: 63088 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63088 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63088' 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63088 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63088 ']' 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:41.316 [2024-11-05 11:25:40.388057] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:41.316 [2024-11-05 11:25:40.388222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.316 [2024-11-05 11:25:40.565444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.575 [2024-11-05 11:25:40.684042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.834 [2024-11-05 11:25:40.892207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.834 [2024-11-05 11:25:40.892257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.093 [2024-11-05 11:25:41.236672] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.093 [2024-11-05 11:25:41.236770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.093 [2024-11-05 11:25:41.236785] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.093 [2024-11-05 11:25:41.236795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.093 
11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.093 "name": "Existed_Raid", 00:09:42.093 "uuid": "f489b5a9-8d43-414e-9748-0199b39bab35", 00:09:42.093 "strip_size_kb": 0, 00:09:42.093 "state": "configuring", 00:09:42.093 "raid_level": "raid1", 00:09:42.093 "superblock": true, 00:09:42.093 "num_base_bdevs": 2, 00:09:42.093 "num_base_bdevs_discovered": 0, 00:09:42.093 "num_base_bdevs_operational": 2, 00:09:42.093 "base_bdevs_list": [ 00:09:42.093 { 00:09:42.093 "name": "BaseBdev1", 00:09:42.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.093 "is_configured": false, 00:09:42.093 "data_offset": 0, 00:09:42.093 "data_size": 0 00:09:42.093 }, 00:09:42.093 { 00:09:42.093 "name": "BaseBdev2", 00:09:42.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.093 "is_configured": false, 00:09:42.093 "data_offset": 0, 00:09:42.093 "data_size": 0 00:09:42.093 } 00:09:42.093 ] 00:09:42.093 }' 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.093 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.662 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.662 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.662 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 
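Across this trace the raid bdev moves through a small state machine that verify_raid_bdev_state keeps asserting: "configuring" while base bdevs are missing (0/2 and 1/2 discovered above), "online" once fully assembled, still "online" degraded after one raid1 member is removed (has_redundancy returns 0 for raid1), and deconfigured to "offline" when the last member goes. An illustrative reconstruction of that logic, not SPDK source:

```python
# Illustrative reconstruction of the state transitions this trace asserts
# via verify_raid_bdev_state / has_redundancy; not code from bdev_raid.sh
# or the SPDK raid module.
def expected_state(prev_state, num_base_bdevs, num_discovered, has_redundancy):
    if prev_state == "configuring":
        # Still assembling: online only once every member is discovered.
        return "online" if num_discovered == num_base_bdevs else "configuring"
    # Already online and a member was just removed.
    if num_discovered == 0:
        return "offline"  # raid_bdev_deconfigure, then cleanup
    return "online" if has_redundancy else "offline"
```

The five cases exercised in this log: 0/2 and 1/2 discovered while configuring, full assembly, one member removed from an online raid1, and the final removal that triggers raid_bdev_deconfigure.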
[2024-11-05 11:25:41.699872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.663 [2024-11-05 11:25:41.699968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 [2024-11-05 11:25:41.707859] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.663 [2024-11-05 11:25:41.707949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.663 [2024-11-05 11:25:41.707981] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.663 [2024-11-05 11:25:41.708013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 [2024-11-05 11:25:41.750835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.663 BaseBdev1 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 [ 00:09:42.663 { 00:09:42.663 "name": "BaseBdev1", 00:09:42.663 "aliases": [ 00:09:42.663 "8edd7ab0-79c4-4a96-a8f4-bccda8f64818" 00:09:42.663 ], 00:09:42.663 "product_name": "Malloc disk", 00:09:42.663 "block_size": 512, 00:09:42.663 "num_blocks": 65536, 00:09:42.663 "uuid": "8edd7ab0-79c4-4a96-a8f4-bccda8f64818", 00:09:42.663 "assigned_rate_limits": { 00:09:42.663 "rw_ios_per_sec": 0, 00:09:42.663 "rw_mbytes_per_sec": 0, 00:09:42.663 "r_mbytes_per_sec": 0, 
00:09:42.663 "w_mbytes_per_sec": 0 00:09:42.663 }, 00:09:42.663 "claimed": true, 00:09:42.663 "claim_type": "exclusive_write", 00:09:42.663 "zoned": false, 00:09:42.663 "supported_io_types": { 00:09:42.663 "read": true, 00:09:42.663 "write": true, 00:09:42.663 "unmap": true, 00:09:42.663 "flush": true, 00:09:42.663 "reset": true, 00:09:42.663 "nvme_admin": false, 00:09:42.663 "nvme_io": false, 00:09:42.663 "nvme_io_md": false, 00:09:42.663 "write_zeroes": true, 00:09:42.663 "zcopy": true, 00:09:42.663 "get_zone_info": false, 00:09:42.663 "zone_management": false, 00:09:42.663 "zone_append": false, 00:09:42.663 "compare": false, 00:09:42.663 "compare_and_write": false, 00:09:42.663 "abort": true, 00:09:42.663 "seek_hole": false, 00:09:42.663 "seek_data": false, 00:09:42.663 "copy": true, 00:09:42.663 "nvme_iov_md": false 00:09:42.663 }, 00:09:42.663 "memory_domains": [ 00:09:42.663 { 00:09:42.663 "dma_device_id": "system", 00:09:42.663 "dma_device_type": 1 00:09:42.663 }, 00:09:42.663 { 00:09:42.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.663 "dma_device_type": 2 00:09:42.663 } 00:09:42.663 ], 00:09:42.663 "driver_specific": {} 00:09:42.663 } 00:09:42.663 ] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.663 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.664 "name": "Existed_Raid", 00:09:42.664 "uuid": "220d546e-e26f-4d57-a5c9-ad3048a48862", 00:09:42.664 "strip_size_kb": 0, 00:09:42.664 "state": "configuring", 00:09:42.664 "raid_level": "raid1", 00:09:42.664 "superblock": true, 00:09:42.664 "num_base_bdevs": 2, 00:09:42.664 "num_base_bdevs_discovered": 1, 00:09:42.664 "num_base_bdevs_operational": 2, 00:09:42.664 "base_bdevs_list": [ 00:09:42.664 { 00:09:42.664 "name": "BaseBdev1", 00:09:42.664 "uuid": "8edd7ab0-79c4-4a96-a8f4-bccda8f64818", 00:09:42.664 "is_configured": true, 00:09:42.664 "data_offset": 2048, 00:09:42.664 "data_size": 63488 00:09:42.664 }, 00:09:42.664 { 00:09:42.664 "name": "BaseBdev2", 00:09:42.664 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:42.664 "is_configured": false, 00:09:42.664 "data_offset": 0, 00:09:42.664 "data_size": 0 00:09:42.664 } 00:09:42.664 ] 00:09:42.664 }' 00:09:42.664 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.664 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.233 [2024-11-05 11:25:42.218102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.233 [2024-11-05 11:25:42.218224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.233 [2024-11-05 11:25:42.226117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.233 [2024-11-05 11:25:42.227974] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.233 [2024-11-05 11:25:42.228055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.233 "name": "Existed_Raid", 00:09:43.233 "uuid": "31e3583a-31b6-4732-a1a6-ac912cddbac9", 00:09:43.233 "strip_size_kb": 0, 00:09:43.233 "state": "configuring", 00:09:43.233 "raid_level": "raid1", 00:09:43.233 "superblock": true, 00:09:43.233 "num_base_bdevs": 2, 00:09:43.233 "num_base_bdevs_discovered": 1, 00:09:43.233 "num_base_bdevs_operational": 2, 00:09:43.233 "base_bdevs_list": [ 00:09:43.233 { 00:09:43.233 "name": "BaseBdev1", 00:09:43.233 "uuid": "8edd7ab0-79c4-4a96-a8f4-bccda8f64818", 00:09:43.233 "is_configured": true, 00:09:43.233 "data_offset": 2048, 00:09:43.233 "data_size": 63488 00:09:43.233 }, 00:09:43.233 { 00:09:43.233 "name": "BaseBdev2", 00:09:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.233 "is_configured": false, 00:09:43.233 "data_offset": 0, 00:09:43.233 "data_size": 0 00:09:43.233 } 00:09:43.233 ] 00:09:43.233 }' 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.233 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 [2024-11-05 11:25:42.678409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.492 [2024-11-05 11:25:42.678750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.492 [2024-11-05 11:25:42.678801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.492 [2024-11-05 11:25:42.679102] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:43.492 BaseBdev2 00:09:43.492 [2024-11-05 11:25:42.679322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.492 [2024-11-05 11:25:42.679342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:43.492 [2024-11-05 11:25:42.679489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 [ 00:09:43.492 { 00:09:43.492 "name": "BaseBdev2", 00:09:43.492 "aliases": [ 00:09:43.492 "eda44295-4c87-4780-aa35-9984c6da7260" 00:09:43.492 ], 00:09:43.492 "product_name": "Malloc disk", 00:09:43.492 "block_size": 512, 00:09:43.492 "num_blocks": 65536, 00:09:43.492 "uuid": "eda44295-4c87-4780-aa35-9984c6da7260", 00:09:43.492 "assigned_rate_limits": { 00:09:43.492 "rw_ios_per_sec": 0, 00:09:43.492 "rw_mbytes_per_sec": 0, 00:09:43.492 "r_mbytes_per_sec": 0, 00:09:43.492 "w_mbytes_per_sec": 0 00:09:43.492 }, 00:09:43.492 "claimed": true, 00:09:43.492 "claim_type": "exclusive_write", 00:09:43.492 "zoned": false, 00:09:43.492 "supported_io_types": { 00:09:43.492 "read": true, 00:09:43.492 "write": true, 00:09:43.492 "unmap": true, 00:09:43.492 "flush": true, 00:09:43.492 "reset": true, 00:09:43.492 "nvme_admin": false, 00:09:43.492 "nvme_io": false, 00:09:43.492 "nvme_io_md": false, 00:09:43.492 "write_zeroes": true, 00:09:43.492 "zcopy": true, 00:09:43.492 "get_zone_info": false, 00:09:43.492 "zone_management": false, 00:09:43.492 "zone_append": false, 00:09:43.492 "compare": false, 00:09:43.492 "compare_and_write": false, 00:09:43.492 "abort": true, 00:09:43.492 "seek_hole": false, 00:09:43.492 "seek_data": false, 00:09:43.492 "copy": true, 00:09:43.492 "nvme_iov_md": false 00:09:43.492 }, 00:09:43.492 "memory_domains": [ 00:09:43.492 { 00:09:43.492 "dma_device_id": "system", 00:09:43.492 "dma_device_type": 1 00:09:43.492 }, 00:09:43.492 { 00:09:43.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.492 "dma_device_type": 2 00:09:43.492 } 00:09:43.492 ], 00:09:43.492 "driver_specific": {} 00:09:43.492 } 00:09:43.492 ] 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.492 11:25:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.492 "name": "Existed_Raid", 00:09:43.492 "uuid": "31e3583a-31b6-4732-a1a6-ac912cddbac9", 00:09:43.492 "strip_size_kb": 0, 00:09:43.492 "state": "online", 00:09:43.492 "raid_level": "raid1", 00:09:43.492 "superblock": true, 00:09:43.492 "num_base_bdevs": 2, 00:09:43.492 "num_base_bdevs_discovered": 2, 00:09:43.492 "num_base_bdevs_operational": 2, 00:09:43.492 "base_bdevs_list": [ 00:09:43.492 { 00:09:43.492 "name": "BaseBdev1", 00:09:43.492 "uuid": "8edd7ab0-79c4-4a96-a8f4-bccda8f64818", 00:09:43.492 "is_configured": true, 00:09:43.492 "data_offset": 2048, 00:09:43.492 "data_size": 63488 00:09:43.492 }, 00:09:43.492 { 00:09:43.492 "name": "BaseBdev2", 00:09:43.492 "uuid": "eda44295-4c87-4780-aa35-9984c6da7260", 00:09:43.492 "is_configured": true, 00:09:43.492 "data_offset": 2048, 00:09:43.492 "data_size": 63488 00:09:43.492 } 00:09:43.492 ] 00:09:43.492 }' 00:09:43.492 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.493 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.062 [2024-11-05 11:25:43.161924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.062 "name": "Existed_Raid", 00:09:44.062 "aliases": [ 00:09:44.062 "31e3583a-31b6-4732-a1a6-ac912cddbac9" 00:09:44.062 ], 00:09:44.062 "product_name": "Raid Volume", 00:09:44.062 "block_size": 512, 00:09:44.062 "num_blocks": 63488, 00:09:44.062 "uuid": "31e3583a-31b6-4732-a1a6-ac912cddbac9", 00:09:44.062 "assigned_rate_limits": { 00:09:44.062 "rw_ios_per_sec": 0, 00:09:44.062 "rw_mbytes_per_sec": 0, 00:09:44.062 "r_mbytes_per_sec": 0, 00:09:44.062 "w_mbytes_per_sec": 0 00:09:44.062 }, 00:09:44.062 "claimed": false, 00:09:44.062 "zoned": false, 00:09:44.062 "supported_io_types": { 00:09:44.062 "read": true, 00:09:44.062 "write": true, 00:09:44.062 "unmap": false, 00:09:44.062 "flush": false, 00:09:44.062 "reset": true, 00:09:44.062 "nvme_admin": false, 00:09:44.062 "nvme_io": false, 00:09:44.062 "nvme_io_md": false, 00:09:44.062 "write_zeroes": true, 00:09:44.062 "zcopy": false, 00:09:44.062 "get_zone_info": false, 00:09:44.062 "zone_management": false, 00:09:44.062 "zone_append": false, 00:09:44.062 "compare": false, 00:09:44.062 "compare_and_write": false, 00:09:44.062 "abort": false, 00:09:44.062 "seek_hole": false, 00:09:44.062 "seek_data": false, 00:09:44.062 "copy": false, 00:09:44.062 "nvme_iov_md": false 00:09:44.062 }, 00:09:44.062 "memory_domains": [ 00:09:44.062 { 00:09:44.062 
"dma_device_id": "system", 00:09:44.062 "dma_device_type": 1 00:09:44.062 }, 00:09:44.062 { 00:09:44.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.062 "dma_device_type": 2 00:09:44.062 }, 00:09:44.062 { 00:09:44.062 "dma_device_id": "system", 00:09:44.062 "dma_device_type": 1 00:09:44.062 }, 00:09:44.062 { 00:09:44.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.062 "dma_device_type": 2 00:09:44.062 } 00:09:44.062 ], 00:09:44.062 "driver_specific": { 00:09:44.062 "raid": { 00:09:44.062 "uuid": "31e3583a-31b6-4732-a1a6-ac912cddbac9", 00:09:44.062 "strip_size_kb": 0, 00:09:44.062 "state": "online", 00:09:44.062 "raid_level": "raid1", 00:09:44.062 "superblock": true, 00:09:44.062 "num_base_bdevs": 2, 00:09:44.062 "num_base_bdevs_discovered": 2, 00:09:44.062 "num_base_bdevs_operational": 2, 00:09:44.062 "base_bdevs_list": [ 00:09:44.062 { 00:09:44.062 "name": "BaseBdev1", 00:09:44.062 "uuid": "8edd7ab0-79c4-4a96-a8f4-bccda8f64818", 00:09:44.062 "is_configured": true, 00:09:44.062 "data_offset": 2048, 00:09:44.062 "data_size": 63488 00:09:44.062 }, 00:09:44.062 { 00:09:44.062 "name": "BaseBdev2", 00:09:44.062 "uuid": "eda44295-4c87-4780-aa35-9984c6da7260", 00:09:44.062 "is_configured": true, 00:09:44.062 "data_offset": 2048, 00:09:44.062 "data_size": 63488 00:09:44.062 } 00:09:44.062 ] 00:09:44.062 } 00:09:44.062 } 00:09:44.062 }' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:44.062 BaseBdev2' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.062 11:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.062 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.322 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.323 [2024-11-05 11:25:43.409341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.323 11:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.323 "name": "Existed_Raid", 00:09:44.323 "uuid": "31e3583a-31b6-4732-a1a6-ac912cddbac9", 00:09:44.323 "strip_size_kb": 0, 00:09:44.323 "state": "online", 00:09:44.323 "raid_level": "raid1", 00:09:44.323 "superblock": true, 00:09:44.323 "num_base_bdevs": 2, 00:09:44.323 "num_base_bdevs_discovered": 1, 00:09:44.323 "num_base_bdevs_operational": 1, 00:09:44.323 "base_bdevs_list": [ 00:09:44.323 { 00:09:44.323 "name": null, 00:09:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.323 "is_configured": false, 00:09:44.323 "data_offset": 0, 00:09:44.323 "data_size": 63488 00:09:44.323 }, 00:09:44.323 { 00:09:44.323 "name": "BaseBdev2", 00:09:44.323 "uuid": "eda44295-4c87-4780-aa35-9984c6da7260", 00:09:44.323 "is_configured": true, 00:09:44.323 "data_offset": 2048, 00:09:44.323 "data_size": 63488 00:09:44.323 } 00:09:44.323 ] 00:09:44.323 }' 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.323 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 11:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.892 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 [2024-11-05 11:25:43.987502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.892 [2024-11-05 11:25:43.987612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.892 [2024-11-05 11:25:44.082563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.892 [2024-11-05 11:25:44.082709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.892 [2024-11-05 11:25:44.082752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, 
state offline 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63088 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63088 ']' 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63088 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.892 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63088 00:09:45.152 killing process with pid 63088 00:09:45.152 11:25:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.152 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.152 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63088' 00:09:45.152 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63088 00:09:45.152 [2024-11-05 11:25:44.180781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.152 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63088 00:09:45.152 [2024-11-05 11:25:44.198147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.096 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.096 00:09:46.096 real 0m5.039s 00:09:46.096 user 0m7.248s 00:09:46.096 sys 0m0.854s 00:09:46.096 ************************************ 00:09:46.096 END TEST raid_state_function_test_sb 00:09:46.096 ************************************ 00:09:46.096 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.096 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.356 11:25:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:46.356 11:25:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:46.356 11:25:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.356 11:25:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.356 ************************************ 00:09:46.356 START TEST raid_superblock_test 00:09:46.356 ************************************ 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:09:46.356 
11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63335 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63335 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 
-- # '[' -z 63335 ']' 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.356 11:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.356 [2024-11-05 11:25:45.492405] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:46.356 [2024-11-05 11:25:45.492596] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:09:46.615 [2024-11-05 11:25:45.665245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.615 [2024-11-05 11:25:45.782423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.874 [2024-11-05 11:25:45.981069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.874 [2024-11-05 11:25:45.981238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- 
# (( i <= num_base_bdevs )) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.132 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.133 malloc1 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.133 [2024-11-05 11:25:46.395987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.133 [2024-11-05 11:25:46.396148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.133 [2024-11-05 11:25:46.396195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.133 [2024-11-05 11:25:46.396241] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.133 [2024-11-05 11:25:46.398379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.133 [2024-11-05 11:25:46.398460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.133 pt1 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.133 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.392 malloc2 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.392 [2024-11-05 11:25:46.455102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.392 [2024-11-05 11:25:46.455236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.392 [2024-11-05 11:25:46.455283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.392 [2024-11-05 11:25:46.455314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.392 [2024-11-05 11:25:46.457513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.392 [2024-11-05 11:25:46.457589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.392 pt2 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.392 [2024-11-05 11:25:46.467104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.392 [2024-11-05 11:25:46.468929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.392 [2024-11-05 11:25:46.469175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:47.392 [2024-11-05 11:25:46.469199] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.392 [2024-11-05 11:25:46.469473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:47.392 [2024-11-05 11:25:46.469654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:47.392 [2024-11-05 11:25:46.469670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:47.392 [2024-11-05 11:25:46.469843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.392 "name": "raid_bdev1", 00:09:47.392 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:47.392 "strip_size_kb": 0, 00:09:47.392 "state": "online", 00:09:47.392 "raid_level": "raid1", 00:09:47.392 "superblock": true, 00:09:47.392 "num_base_bdevs": 2, 00:09:47.392 "num_base_bdevs_discovered": 2, 00:09:47.392 "num_base_bdevs_operational": 2, 00:09:47.392 "base_bdevs_list": [ 00:09:47.392 { 00:09:47.392 "name": "pt1", 00:09:47.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.392 "is_configured": true, 00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 }, 00:09:47.392 { 00:09:47.392 "name": "pt2", 00:09:47.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.392 "is_configured": true, 00:09:47.392 "data_offset": 2048, 00:09:47.392 "data_size": 63488 00:09:47.392 } 00:09:47.392 ] 00:09:47.392 }' 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.392 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.651 
11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.651 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.651 [2024-11-05 11:25:46.918629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.910 11:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.910 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.910 "name": "raid_bdev1", 00:09:47.910 "aliases": [ 00:09:47.910 "6f24c917-c351-4160-b8c4-1801e97598e2" 00:09:47.910 ], 00:09:47.910 "product_name": "Raid Volume", 00:09:47.910 "block_size": 512, 00:09:47.910 "num_blocks": 63488, 00:09:47.910 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:47.910 "assigned_rate_limits": { 00:09:47.910 "rw_ios_per_sec": 0, 00:09:47.910 "rw_mbytes_per_sec": 0, 00:09:47.910 "r_mbytes_per_sec": 0, 00:09:47.910 "w_mbytes_per_sec": 0 00:09:47.910 }, 00:09:47.910 "claimed": false, 00:09:47.910 "zoned": false, 00:09:47.910 "supported_io_types": { 00:09:47.910 "read": true, 00:09:47.910 "write": true, 00:09:47.910 "unmap": false, 00:09:47.910 "flush": false, 00:09:47.910 "reset": true, 00:09:47.910 "nvme_admin": false, 00:09:47.910 "nvme_io": false, 00:09:47.910 "nvme_io_md": false, 00:09:47.910 "write_zeroes": true, 00:09:47.910 "zcopy": false, 00:09:47.910 "get_zone_info": false, 00:09:47.910 "zone_management": false, 00:09:47.910 "zone_append": false, 00:09:47.910 "compare": false, 00:09:47.910 
"compare_and_write": false, 00:09:47.910 "abort": false, 00:09:47.910 "seek_hole": false, 00:09:47.910 "seek_data": false, 00:09:47.910 "copy": false, 00:09:47.910 "nvme_iov_md": false 00:09:47.910 }, 00:09:47.910 "memory_domains": [ 00:09:47.910 { 00:09:47.911 "dma_device_id": "system", 00:09:47.911 "dma_device_type": 1 00:09:47.911 }, 00:09:47.911 { 00:09:47.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.911 "dma_device_type": 2 00:09:47.911 }, 00:09:47.911 { 00:09:47.911 "dma_device_id": "system", 00:09:47.911 "dma_device_type": 1 00:09:47.911 }, 00:09:47.911 { 00:09:47.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.911 "dma_device_type": 2 00:09:47.911 } 00:09:47.911 ], 00:09:47.911 "driver_specific": { 00:09:47.911 "raid": { 00:09:47.911 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:47.911 "strip_size_kb": 0, 00:09:47.911 "state": "online", 00:09:47.911 "raid_level": "raid1", 00:09:47.911 "superblock": true, 00:09:47.911 "num_base_bdevs": 2, 00:09:47.911 "num_base_bdevs_discovered": 2, 00:09:47.911 "num_base_bdevs_operational": 2, 00:09:47.911 "base_bdevs_list": [ 00:09:47.911 { 00:09:47.911 "name": "pt1", 00:09:47.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.911 "is_configured": true, 00:09:47.911 "data_offset": 2048, 00:09:47.911 "data_size": 63488 00:09:47.911 }, 00:09:47.911 { 00:09:47.911 "name": "pt2", 00:09:47.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.911 "is_configured": true, 00:09:47.911 "data_offset": 2048, 00:09:47.911 "data_size": 63488 00:09:47.911 } 00:09:47.911 ] 00:09:47.911 } 00:09:47.911 } 00:09:47.911 }' 00:09:47.911 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.911 pt2' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.911 11:25:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:47.911 [2024-11-05 11:25:47.154153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f24c917-c351-4160-b8c4-1801e97598e2 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6f24c917-c351-4160-b8c4-1801e97598e2 ']' 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.911 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.911 [2024-11-05 11:25:47.181831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.911 [2024-11-05 11:25:47.181856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.911 [2024-11-05 11:25:47.181935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.911 [2024-11-05 11:25:47.181993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.911 [2024-11-05 11:25:47.182005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.171 [2024-11-05 11:25:47.337593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.171 [2024-11-05 11:25:47.339545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.171 [2024-11-05 
11:25:47.339653] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.171 [2024-11-05 11:25:47.339744] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.171 [2024-11-05 11:25:47.339819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.171 [2024-11-05 11:25:47.339852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:48.171 request: 00:09:48.171 { 00:09:48.171 "name": "raid_bdev1", 00:09:48.171 "raid_level": "raid1", 00:09:48.171 "base_bdevs": [ 00:09:48.171 "malloc1", 00:09:48.171 "malloc2" 00:09:48.171 ], 00:09:48.171 "superblock": false, 00:09:48.171 "method": "bdev_raid_create", 00:09:48.171 "req_id": 1 00:09:48.171 } 00:09:48.171 Got JSON-RPC error response 00:09:48.171 response: 00:09:48.171 { 00:09:48.171 "code": -17, 00:09:48.171 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.171 } 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.171 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.172 [2024-11-05 11:25:47.405464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.172 [2024-11-05 11:25:47.405525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.172 [2024-11-05 11:25:47.405542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.172 [2024-11-05 11:25:47.405554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.172 [2024-11-05 11:25:47.407866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.172 [2024-11-05 11:25:47.407912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.172 [2024-11-05 11:25:47.408001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.172 [2024-11-05 11:25:47.408072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.172 pt1 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.172 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.431 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.431 "name": "raid_bdev1", 00:09:48.431 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:48.431 "strip_size_kb": 0, 00:09:48.431 "state": "configuring", 00:09:48.431 "raid_level": "raid1", 00:09:48.431 "superblock": true, 00:09:48.431 "num_base_bdevs": 2, 00:09:48.431 "num_base_bdevs_discovered": 1, 00:09:48.431 "num_base_bdevs_operational": 2, 00:09:48.431 "base_bdevs_list": [ 00:09:48.431 { 00:09:48.431 "name": 
"pt1", 00:09:48.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.431 "is_configured": true, 00:09:48.431 "data_offset": 2048, 00:09:48.431 "data_size": 63488 00:09:48.431 }, 00:09:48.431 { 00:09:48.431 "name": null, 00:09:48.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.431 "is_configured": false, 00:09:48.431 "data_offset": 2048, 00:09:48.431 "data_size": 63488 00:09:48.431 } 00:09:48.431 ] 00:09:48.431 }' 00:09:48.431 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.431 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.692 [2024-11-05 11:25:47.856731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.692 [2024-11-05 11:25:47.856872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.692 [2024-11-05 11:25:47.856914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:48.692 [2024-11-05 11:25:47.856946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.692 [2024-11-05 11:25:47.857453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.692 [2024-11-05 11:25:47.857531] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:09:48.692 [2024-11-05 11:25:47.857662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.692 [2024-11-05 11:25:47.857719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.692 [2024-11-05 11:25:47.857871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.692 [2024-11-05 11:25:47.857911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.692 [2024-11-05 11:25:47.858175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:48.692 [2024-11-05 11:25:47.858373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.692 [2024-11-05 11:25:47.858414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:48.692 [2024-11-05 11:25:47.858597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.692 pt2 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.692 "name": "raid_bdev1", 00:09:48.692 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:48.692 "strip_size_kb": 0, 00:09:48.692 "state": "online", 00:09:48.692 "raid_level": "raid1", 00:09:48.692 "superblock": true, 00:09:48.692 "num_base_bdevs": 2, 00:09:48.692 "num_base_bdevs_discovered": 2, 00:09:48.692 "num_base_bdevs_operational": 2, 00:09:48.692 "base_bdevs_list": [ 00:09:48.692 { 00:09:48.692 "name": "pt1", 00:09:48.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.692 "is_configured": true, 00:09:48.692 "data_offset": 2048, 00:09:48.692 "data_size": 63488 00:09:48.692 }, 00:09:48.692 { 00:09:48.692 "name": "pt2", 00:09:48.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.692 "is_configured": true, 00:09:48.692 "data_offset": 2048, 00:09:48.692 "data_size": 63488 00:09:48.692 } 00:09:48.692 ] 00:09:48.692 }' 
00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.692 11:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.260 [2024-11-05 11:25:48.340151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.260 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.260 "name": "raid_bdev1", 00:09:49.260 "aliases": [ 00:09:49.260 "6f24c917-c351-4160-b8c4-1801e97598e2" 00:09:49.260 ], 00:09:49.260 "product_name": "Raid Volume", 00:09:49.260 "block_size": 512, 00:09:49.260 "num_blocks": 63488, 00:09:49.260 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:49.260 "assigned_rate_limits": { 00:09:49.260 "rw_ios_per_sec": 0, 00:09:49.260 "rw_mbytes_per_sec": 
0, 00:09:49.260 "r_mbytes_per_sec": 0, 00:09:49.260 "w_mbytes_per_sec": 0 00:09:49.260 }, 00:09:49.261 "claimed": false, 00:09:49.261 "zoned": false, 00:09:49.261 "supported_io_types": { 00:09:49.261 "read": true, 00:09:49.261 "write": true, 00:09:49.261 "unmap": false, 00:09:49.261 "flush": false, 00:09:49.261 "reset": true, 00:09:49.261 "nvme_admin": false, 00:09:49.261 "nvme_io": false, 00:09:49.261 "nvme_io_md": false, 00:09:49.261 "write_zeroes": true, 00:09:49.261 "zcopy": false, 00:09:49.261 "get_zone_info": false, 00:09:49.261 "zone_management": false, 00:09:49.261 "zone_append": false, 00:09:49.261 "compare": false, 00:09:49.261 "compare_and_write": false, 00:09:49.261 "abort": false, 00:09:49.261 "seek_hole": false, 00:09:49.261 "seek_data": false, 00:09:49.261 "copy": false, 00:09:49.261 "nvme_iov_md": false 00:09:49.261 }, 00:09:49.261 "memory_domains": [ 00:09:49.261 { 00:09:49.261 "dma_device_id": "system", 00:09:49.261 "dma_device_type": 1 00:09:49.261 }, 00:09:49.261 { 00:09:49.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.261 "dma_device_type": 2 00:09:49.261 }, 00:09:49.261 { 00:09:49.261 "dma_device_id": "system", 00:09:49.261 "dma_device_type": 1 00:09:49.261 }, 00:09:49.261 { 00:09:49.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.261 "dma_device_type": 2 00:09:49.261 } 00:09:49.261 ], 00:09:49.261 "driver_specific": { 00:09:49.261 "raid": { 00:09:49.261 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:49.261 "strip_size_kb": 0, 00:09:49.261 "state": "online", 00:09:49.261 "raid_level": "raid1", 00:09:49.261 "superblock": true, 00:09:49.261 "num_base_bdevs": 2, 00:09:49.261 "num_base_bdevs_discovered": 2, 00:09:49.261 "num_base_bdevs_operational": 2, 00:09:49.261 "base_bdevs_list": [ 00:09:49.261 { 00:09:49.261 "name": "pt1", 00:09:49.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.261 "is_configured": true, 00:09:49.261 "data_offset": 2048, 00:09:49.261 "data_size": 63488 00:09:49.261 }, 00:09:49.261 { 
00:09:49.261 "name": "pt2", 00:09:49.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.261 "is_configured": true, 00:09:49.261 "data_offset": 2048, 00:09:49.261 "data_size": 63488 00:09:49.261 } 00:09:49.261 ] 00:09:49.261 } 00:09:49.261 } 00:09:49.261 }' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.261 pt2' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.261 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.520 [2024-11-05 11:25:48.611707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6f24c917-c351-4160-b8c4-1801e97598e2 '!=' 6f24c917-c351-4160-b8c4-1801e97598e2 ']' 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.520 11:25:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.520 [2024-11-05 11:25:48.655403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.520 "name": "raid_bdev1", 00:09:49.520 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:49.520 "strip_size_kb": 0, 00:09:49.520 "state": "online", 00:09:49.520 "raid_level": "raid1", 00:09:49.520 "superblock": true, 00:09:49.520 "num_base_bdevs": 2, 00:09:49.520 "num_base_bdevs_discovered": 1, 00:09:49.520 "num_base_bdevs_operational": 1, 00:09:49.520 "base_bdevs_list": [ 00:09:49.520 { 00:09:49.520 "name": null, 00:09:49.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.520 "is_configured": false, 00:09:49.520 "data_offset": 0, 00:09:49.520 "data_size": 63488 00:09:49.520 }, 00:09:49.520 { 00:09:49.520 "name": "pt2", 00:09:49.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.520 "is_configured": true, 00:09:49.520 "data_offset": 2048, 00:09:49.520 "data_size": 63488 00:09:49.520 } 00:09:49.520 ] 00:09:49.520 }' 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.520 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 [2024-11-05 11:25:49.118620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.091 [2024-11-05 11:25:49.118721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.091 [2024-11-05 11:25:49.118835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.091 [2024-11-05 11:25:49.118901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.091 [2024-11-05 11:25:49.118939] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 [2024-11-05 11:25:49.190476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.091 [2024-11-05 11:25:49.190589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.091 [2024-11-05 11:25:49.190612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.091 [2024-11-05 11:25:49.190623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.091 [2024-11-05 11:25:49.192917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.091 [2024-11-05 11:25:49.192960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.091 [2024-11-05 11:25:49.193052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.091 [2024-11-05 11:25:49.193105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.091 [2024-11-05 11:25:49.193242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.091 [2024-11-05 11:25:49.193265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.091 [2024-11-05 11:25:49.193484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:50.091 [2024-11-05 11:25:49.193620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.091 [2024-11-05 11:25:49.193629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008200 00:09:50.091 [2024-11-05 11:25:49.193757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.091 pt2 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.091 11:25:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.091 "name": "raid_bdev1", 00:09:50.091 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:50.091 "strip_size_kb": 0, 00:09:50.091 "state": "online", 00:09:50.091 "raid_level": "raid1", 00:09:50.091 "superblock": true, 00:09:50.091 "num_base_bdevs": 2, 00:09:50.091 "num_base_bdevs_discovered": 1, 00:09:50.091 "num_base_bdevs_operational": 1, 00:09:50.091 "base_bdevs_list": [ 00:09:50.091 { 00:09:50.091 "name": null, 00:09:50.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.091 "is_configured": false, 00:09:50.091 "data_offset": 2048, 00:09:50.092 "data_size": 63488 00:09:50.092 }, 00:09:50.092 { 00:09:50.092 "name": "pt2", 00:09:50.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.092 "is_configured": true, 00:09:50.092 "data_offset": 2048, 00:09:50.092 "data_size": 63488 00:09:50.092 } 00:09:50.092 ] 00:09:50.092 }' 00:09:50.092 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.092 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.351 [2024-11-05 11:25:49.609730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.351 [2024-11-05 11:25:49.609824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.351 [2024-11-05 11:25:49.609922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.351 [2024-11-05 11:25:49.609988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.351 [2024-11-05 11:25:49.610078] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.351 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.609 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.610 [2024-11-05 11:25:49.661681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.610 [2024-11-05 11:25:49.661812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.610 [2024-11-05 11:25:49.661851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:50.610 [2024-11-05 11:25:49.661883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.610 [2024-11-05 11:25:49.664187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:09:50.610 [2024-11-05 11:25:49.664270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.610 [2024-11-05 11:25:49.664393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:50.610 [2024-11-05 11:25:49.664466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.610 [2024-11-05 11:25:49.664668] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:50.610 [2024-11-05 11:25:49.664721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.610 [2024-11-05 11:25:49.664783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:50.610 [2024-11-05 11:25:49.664897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.610 [2024-11-05 11:25:49.665017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:50.610 [2024-11-05 11:25:49.665054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.610 [2024-11-05 11:25:49.665329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:50.610 [2024-11-05 11:25:49.665473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:50.610 [2024-11-05 11:25:49.665487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:50.610 [2024-11-05 11:25:49.665672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.610 pt1 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.610 "name": "raid_bdev1", 00:09:50.610 "uuid": "6f24c917-c351-4160-b8c4-1801e97598e2", 00:09:50.610 "strip_size_kb": 0, 00:09:50.610 "state": "online", 00:09:50.610 "raid_level": "raid1", 00:09:50.610 "superblock": true, 00:09:50.610 "num_base_bdevs": 2, 00:09:50.610 
"num_base_bdevs_discovered": 1, 00:09:50.610 "num_base_bdevs_operational": 1, 00:09:50.610 "base_bdevs_list": [ 00:09:50.610 { 00:09:50.610 "name": null, 00:09:50.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.610 "is_configured": false, 00:09:50.610 "data_offset": 2048, 00:09:50.610 "data_size": 63488 00:09:50.610 }, 00:09:50.610 { 00:09:50.610 "name": "pt2", 00:09:50.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.610 "is_configured": true, 00:09:50.610 "data_offset": 2048, 00:09:50.610 "data_size": 63488 00:09:50.610 } 00:09:50.610 ] 00:09:50.610 }' 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.610 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.868 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:50.868 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.868 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:50.868 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.868 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:51.128 [2024-11-05 11:25:50.173106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6f24c917-c351-4160-b8c4-1801e97598e2 '!=' 6f24c917-c351-4160-b8c4-1801e97598e2 ']' 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63335 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63335 ']' 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63335 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63335 00:09:51.128 killing process with pid 63335 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63335' 00:09:51.128 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63335 00:09:51.128 [2024-11-05 11:25:50.260081] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.128 [2024-11-05 11:25:50.260203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.128 [2024-11-05 11:25:50.260253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.128 [2024-11-05 11:25:50.260268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:51.128 11:25:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63335 00:09:51.388 [2024-11-05 11:25:50.472444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.766 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:52.766 ************************************ 00:09:52.766 END TEST raid_superblock_test 00:09:52.766 ************************************ 00:09:52.766 00:09:52.766 real 0m6.210s 00:09:52.766 user 0m9.414s 00:09:52.766 sys 0m1.090s 00:09:52.766 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.766 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.766 11:25:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:52.766 11:25:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:52.766 11:25:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.766 11:25:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.766 ************************************ 00:09:52.766 START TEST raid_read_error_test 00:09:52.766 ************************************ 00:09:52.766 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:09:52.766 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.767 11:25:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9KAB6HvQ7z 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63665 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@811 -- # waitforlisten 63665 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63665 ']' 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.767 11:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.767 [2024-11-05 11:25:51.798538] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:09:52.767 [2024-11-05 11:25:51.798819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63665 ] 00:09:52.767 [2024-11-05 11:25:51.987333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.026 [2024-11-05 11:25:52.104639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.285 [2024-11-05 11:25:52.305544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.285 [2024-11-05 11:25:52.305603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 BaseBdev1_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 true 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 [2024-11-05 11:25:52.737314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.545 [2024-11-05 11:25:52.737368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.545 [2024-11-05 11:25:52.737386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.545 [2024-11-05 11:25:52.737396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.545 [2024-11-05 11:25:52.739446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.545 [2024-11-05 11:25:52.739488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.545 BaseBdev1 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 BaseBdev2_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 true 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 [2024-11-05 11:25:52.801766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.545 [2024-11-05 11:25:52.801815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.545 [2024-11-05 11:25:52.801831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.545 [2024-11-05 11:25:52.801840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.545 [2024-11-05 11:25:52.803894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.545 [2024-11-05 11:25:52.803937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.545 BaseBdev2 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.545 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.545 [2024-11-05 11:25:52.813803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.545 
[2024-11-05 11:25:52.815678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.545 [2024-11-05 11:25:52.815936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:53.545 [2024-11-05 11:25:52.815955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.545 [2024-11-05 11:25:52.816220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:53.545 [2024-11-05 11:25:52.816431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:53.545 [2024-11-05 11:25:52.816443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:53.545 [2024-11-05 11:25:52.816573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.805 "name": "raid_bdev1", 00:09:53.805 "uuid": "732e2c52-64c6-4296-b8ff-9242c0c7c602", 00:09:53.805 "strip_size_kb": 0, 00:09:53.805 "state": "online", 00:09:53.805 "raid_level": "raid1", 00:09:53.805 "superblock": true, 00:09:53.805 "num_base_bdevs": 2, 00:09:53.805 "num_base_bdevs_discovered": 2, 00:09:53.805 "num_base_bdevs_operational": 2, 00:09:53.805 "base_bdevs_list": [ 00:09:53.805 { 00:09:53.805 "name": "BaseBdev1", 00:09:53.805 "uuid": "b7b27847-c68d-575d-a48b-82e95aeb79fc", 00:09:53.805 "is_configured": true, 00:09:53.805 "data_offset": 2048, 00:09:53.805 "data_size": 63488 00:09:53.805 }, 00:09:53.805 { 00:09:53.805 "name": "BaseBdev2", 00:09:53.805 "uuid": "f3ade141-6bd3-5269-acb7-7120bdba60b3", 00:09:53.805 "is_configured": true, 00:09:53.805 "data_offset": 2048, 00:09:53.805 "data_size": 63488 00:09:53.805 } 00:09:53.805 ] 00:09:53.805 }' 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.805 11:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.065 11:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
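For readability, the bdev stack the trace above assembles before `perform_tests` can be sketched as a short script. This is a sketch reconstructed from the trace, not the test itself: `rpc_cmd` is stubbed with an `echo` so the sketch is self-contained, whereas in the real suite it wraps `scripts/rpc.py` against the bdevperf app's `/var/tmp/spdk.sock`.

```shell
#!/usr/bin/env sh
# Sketch of the per-base-bdev stack built by raid_io_error_test (assumed from
# the trace above). rpc_cmd is stubbed here; the real helper talks to the
# running bdevperf process over its RPC socket.
rpc_cmd() { echo "rpc_cmd $*"; }

for bdev in BaseBdev1 BaseBdev2; do
  rpc_cmd bdev_malloc_create 32 512 -b ${bdev}_malloc         # backing malloc bdev
  rpc_cmd bdev_error_create ${bdev}_malloc                    # error-injection wrapper (EE_<name>)
  rpc_cmd bdev_passthru_create -b EE_${bdev}_malloc -p $bdev  # passthru bdev exposed to raid
done

# Assemble the two passthru bdevs into a raid1 with a superblock (-s).
rpc_cmd bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# Arm the error bdev so the next read on BaseBdev1 fails.
rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
```

Layering error and passthru bdevs between the malloc devices and the raid is what lets the test fail a single leg of the raid1 on demand without touching the raid module's own code paths.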
00:09:54.065 11:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.324 [2024-11-05 11:25:53.370200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.260 "name": "raid_bdev1", 00:09:55.260 "uuid": "732e2c52-64c6-4296-b8ff-9242c0c7c602", 00:09:55.260 "strip_size_kb": 0, 00:09:55.260 "state": "online", 00:09:55.260 "raid_level": "raid1", 00:09:55.260 "superblock": true, 00:09:55.260 "num_base_bdevs": 2, 00:09:55.260 "num_base_bdevs_discovered": 2, 00:09:55.260 "num_base_bdevs_operational": 2, 00:09:55.260 "base_bdevs_list": [ 00:09:55.260 { 00:09:55.260 "name": "BaseBdev1", 00:09:55.260 "uuid": "b7b27847-c68d-575d-a48b-82e95aeb79fc", 00:09:55.260 "is_configured": true, 00:09:55.260 "data_offset": 2048, 00:09:55.260 "data_size": 63488 00:09:55.260 }, 00:09:55.260 { 00:09:55.260 "name": "BaseBdev2", 00:09:55.260 "uuid": "f3ade141-6bd3-5269-acb7-7120bdba60b3", 00:09:55.260 "is_configured": true, 00:09:55.260 "data_offset": 2048, 00:09:55.260 "data_size": 63488 00:09:55.260 } 00:09:55.260 ] 00:09:55.260 }' 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.260 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.519 [2024-11-05 11:25:54.726033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.519 [2024-11-05 11:25:54.726163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.519 [2024-11-05 11:25:54.728794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.519 [2024-11-05 11:25:54.728878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.519 [2024-11-05 11:25:54.728988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.519 [2024-11-05 11:25:54.729026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:55.519 { 00:09:55.519 "results": [ 00:09:55.519 { 00:09:55.519 "job": "raid_bdev1", 00:09:55.519 "core_mask": "0x1", 00:09:55.519 "workload": "randrw", 00:09:55.519 "percentage": 50, 00:09:55.519 "status": "finished", 00:09:55.519 "queue_depth": 1, 00:09:55.519 "io_size": 131072, 00:09:55.519 "runtime": 1.356702, 00:09:55.519 "iops": 17777.66967248519, 00:09:55.519 "mibps": 2222.2087090606487, 00:09:55.519 "io_failed": 0, 00:09:55.519 "io_timeout": 0, 00:09:55.519 "avg_latency_us": 53.580035960705025, 00:09:55.519 "min_latency_us": 22.91703056768559, 00:09:55.519 "max_latency_us": 1445.2262008733624 00:09:55.519 } 00:09:55.519 ], 00:09:55.519 "core_count": 1 00:09:55.519 } 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63665 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 63665 ']' 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63665 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63665 00:09:55.519 killing process with pid 63665 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63665' 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63665 00:09:55.519 [2024-11-05 11:25:54.776203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.519 11:25:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63665 00:09:55.777 [2024-11-05 11:25:54.912620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9KAB6HvQ7z 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:57.154 ************************************ 00:09:57.154 END TEST raid_read_error_test 00:09:57.154 ************************************ 00:09:57.154 11:25:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:57.154 00:09:57.154 real 0m4.398s 00:09:57.154 user 0m5.291s 00:09:57.154 sys 0m0.552s 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.154 11:25:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.154 11:25:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:57.154 11:25:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:57.154 11:25:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.154 11:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.154 ************************************ 00:09:57.154 START TEST raid_write_error_test 00:09:57.154 ************************************ 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NAOxRuxpHO 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63811 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63811 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # '[' -z 63811 ']' 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.154 11:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.154 [2024-11-05 11:25:56.247887] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:57.154 [2024-11-05 11:25:56.248067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63811 ] 00:09:57.154 [2024-11-05 11:25:56.424616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.413 [2024-11-05 11:25:56.536888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.672 [2024-11-05 11:25:56.733892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.672 [2024-11-05 11:25:56.734034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.930 11:25:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 BaseBdev1_malloc 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 true 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 [2024-11-05 11:25:57.159949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.930 [2024-11-05 11:25:57.160003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.930 [2024-11-05 11:25:57.160022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.930 [2024-11-05 11:25:57.160033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.930 [2024-11-05 11:25:57.162133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.930 [2024-11-05 11:25:57.162185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:09:57.930 BaseBdev1 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.187 BaseBdev2_malloc 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 true 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 [2024-11-05 11:25:57.225377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.188 [2024-11-05 11:25:57.225432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.188 [2024-11-05 11:25:57.225447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:58.188 [2024-11-05 11:25:57.225457] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.188 [2024-11-05 11:25:57.227516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.188 [2024-11-05 11:25:57.227614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.188 BaseBdev2 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 [2024-11-05 11:25:57.237415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.188 [2024-11-05 11:25:57.239288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.188 [2024-11-05 11:25:57.239472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.188 [2024-11-05 11:25:57.239488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.188 [2024-11-05 11:25:57.239707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.188 [2024-11-05 11:25:57.239878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.188 [2024-11-05 11:25:57.239888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:58.188 [2024-11-05 11:25:57.240073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.188 "name": "raid_bdev1", 00:09:58.188 "uuid": "392e97fe-2a89-4c5c-be6f-cc2ba295dd6c", 00:09:58.188 "strip_size_kb": 0, 00:09:58.188 "state": "online", 00:09:58.188 "raid_level": "raid1", 00:09:58.188 "superblock": true, 00:09:58.188 "num_base_bdevs": 2, 00:09:58.188 
"num_base_bdevs_discovered": 2, 00:09:58.188 "num_base_bdevs_operational": 2, 00:09:58.188 "base_bdevs_list": [ 00:09:58.188 { 00:09:58.188 "name": "BaseBdev1", 00:09:58.188 "uuid": "dcf6472e-a562-5f2c-b2f2-f9d95c743396", 00:09:58.188 "is_configured": true, 00:09:58.188 "data_offset": 2048, 00:09:58.188 "data_size": 63488 00:09:58.188 }, 00:09:58.188 { 00:09:58.188 "name": "BaseBdev2", 00:09:58.188 "uuid": "70630c29-a165-5302-9dbf-a1a8089d7dc1", 00:09:58.188 "is_configured": true, 00:09:58.188 "data_offset": 2048, 00:09:58.188 "data_size": 63488 00:09:58.188 } 00:09:58.188 ] 00:09:58.188 }' 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.188 11:25:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.447 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.447 11:25:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.706 [2024-11-05 11:25:57.737873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.641 [2024-11-05 11:25:58.649837] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:59.641 [2024-11-05 11:25:58.649990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.641 [2024-11-05 11:25:58.650217] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.641 11:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.642 11:25:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.642 11:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.642 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.642 "name": "raid_bdev1", 00:09:59.642 "uuid": "392e97fe-2a89-4c5c-be6f-cc2ba295dd6c", 00:09:59.642 "strip_size_kb": 0, 00:09:59.642 "state": "online", 00:09:59.642 "raid_level": "raid1", 00:09:59.642 "superblock": true, 00:09:59.642 "num_base_bdevs": 2, 00:09:59.642 "num_base_bdevs_discovered": 1, 00:09:59.642 "num_base_bdevs_operational": 1, 00:09:59.642 "base_bdevs_list": [ 00:09:59.642 { 00:09:59.642 "name": null, 00:09:59.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.642 "is_configured": false, 00:09:59.642 "data_offset": 0, 00:09:59.642 "data_size": 63488 00:09:59.642 }, 00:09:59.642 { 00:09:59.642 "name": "BaseBdev2", 00:09:59.642 "uuid": "70630c29-a165-5302-9dbf-a1a8089d7dc1", 00:09:59.642 "is_configured": true, 00:09:59.642 "data_offset": 2048, 00:09:59.642 "data_size": 63488 00:09:59.642 } 00:09:59.642 ] 00:09:59.642 }' 00:09:59.642 11:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.642 11:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.901 [2024-11-05 11:25:59.130761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.901 [2024-11-05 11:25:59.130903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.901 [2024-11-05 11:25:59.133928] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.901 [2024-11-05 11:25:59.134042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.901 [2024-11-05 11:25:59.134139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.901 [2024-11-05 11:25:59.134152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:59.901 { 00:09:59.901 "results": [ 00:09:59.901 { 00:09:59.901 "job": "raid_bdev1", 00:09:59.901 "core_mask": "0x1", 00:09:59.901 "workload": "randrw", 00:09:59.901 "percentage": 50, 00:09:59.901 "status": "finished", 00:09:59.901 "queue_depth": 1, 00:09:59.901 "io_size": 131072, 00:09:59.901 "runtime": 1.393927, 00:09:59.901 "iops": 21246.449778216505, 00:09:59.901 "mibps": 2655.806222277063, 00:09:59.901 "io_failed": 0, 00:09:59.901 "io_timeout": 0, 00:09:59.901 "avg_latency_us": 44.45650574810264, 00:09:59.901 "min_latency_us": 22.46986899563319, 00:09:59.901 "max_latency_us": 1373.6803493449781 00:09:59.901 } 00:09:59.901 ], 00:09:59.901 "core_count": 1 00:09:59.901 } 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63811 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63811 ']' 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63811 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:59.901 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63811 00:10:00.171 killing process with pid 63811 00:10:00.171 11:25:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.171 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.171 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63811' 00:10:00.171 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63811 00:10:00.171 [2024-11-05 11:25:59.182402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.171 11:25:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63811 00:10:00.171 [2024-11-05 11:25:59.322693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NAOxRuxpHO 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.565 ************************************ 00:10:01.565 END TEST raid_write_error_test 00:10:01.565 ************************************ 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:01.565 00:10:01.565 real 0m4.326s 00:10:01.565 user 0m5.154s 00:10:01.565 sys 0m0.584s 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.565 11:26:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.565 11:26:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:01.565 11:26:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:01.565 11:26:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:01.565 11:26:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:01.565 11:26:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.565 11:26:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 ************************************ 00:10:01.565 START TEST raid_state_function_test 00:10:01.565 ************************************ 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63949 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.565 11:26:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63949' 00:10:01.565 Process raid pid: 63949 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63949 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63949 ']' 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.565 11:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 [2024-11-05 11:26:00.641973] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:01.565 [2024-11-05 11:26:00.642159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.565 [2024-11-05 11:26:00.813434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.823 [2024-11-05 11:26:00.924220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.082 [2024-11-05 11:26:01.130604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.082 [2024-11-05 11:26:01.130710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 [2024-11-05 11:26:01.471343] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.342 [2024-11-05 11:26:01.471488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.342 [2024-11-05 11:26:01.471519] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.342 [2024-11-05 11:26:01.471544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.342 [2024-11-05 11:26:01.471563] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:10:02.342 [2024-11-05 11:26:01.471584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 11:26:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.342 "name": "Existed_Raid", 00:10:02.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.342 "strip_size_kb": 64, 00:10:02.342 "state": "configuring", 00:10:02.342 "raid_level": "raid0", 00:10:02.342 "superblock": false, 00:10:02.342 "num_base_bdevs": 3, 00:10:02.342 "num_base_bdevs_discovered": 0, 00:10:02.342 "num_base_bdevs_operational": 3, 00:10:02.342 "base_bdevs_list": [ 00:10:02.342 { 00:10:02.342 "name": "BaseBdev1", 00:10:02.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.342 "is_configured": false, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 0 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "name": "BaseBdev2", 00:10:02.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.342 "is_configured": false, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 0 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "name": "BaseBdev3", 00:10:02.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.342 "is_configured": false, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 0 00:10:02.342 } 00:10:02.342 ] 00:10:02.342 }' 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.342 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 [2024-11-05 11:26:01.938503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.910 [2024-11-05 11:26:01.938550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 [2024-11-05 11:26:01.950483] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.910 [2024-11-05 11:26:01.950604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.910 [2024-11-05 11:26:01.950632] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.910 [2024-11-05 11:26:01.950656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.910 [2024-11-05 11:26:01.950673] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.910 [2024-11-05 11:26:01.950693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.910 11:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 [2024-11-05 11:26:02.000208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.910 BaseBdev1 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.910 [ 00:10:02.910 { 00:10:02.910 "name": "BaseBdev1", 00:10:02.910 "aliases": [ 00:10:02.910 "3f8268b0-919c-4bfc-98a2-8fadbd6135f5" 00:10:02.910 ], 00:10:02.910 "product_name": "Malloc disk", 00:10:02.910 "block_size": 512, 00:10:02.910 "num_blocks": 65536, 00:10:02.910 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:02.910 "assigned_rate_limits": { 00:10:02.910 "rw_ios_per_sec": 0, 00:10:02.910 "rw_mbytes_per_sec": 0, 00:10:02.910 "r_mbytes_per_sec": 0, 00:10:02.910 "w_mbytes_per_sec": 0 00:10:02.910 }, 
00:10:02.910 "claimed": true, 00:10:02.910 "claim_type": "exclusive_write", 00:10:02.910 "zoned": false, 00:10:02.910 "supported_io_types": { 00:10:02.910 "read": true, 00:10:02.910 "write": true, 00:10:02.910 "unmap": true, 00:10:02.910 "flush": true, 00:10:02.910 "reset": true, 00:10:02.910 "nvme_admin": false, 00:10:02.910 "nvme_io": false, 00:10:02.910 "nvme_io_md": false, 00:10:02.910 "write_zeroes": true, 00:10:02.910 "zcopy": true, 00:10:02.910 "get_zone_info": false, 00:10:02.910 "zone_management": false, 00:10:02.910 "zone_append": false, 00:10:02.910 "compare": false, 00:10:02.910 "compare_and_write": false, 00:10:02.910 "abort": true, 00:10:02.910 "seek_hole": false, 00:10:02.910 "seek_data": false, 00:10:02.910 "copy": true, 00:10:02.910 "nvme_iov_md": false 00:10:02.910 }, 00:10:02.910 "memory_domains": [ 00:10:02.910 { 00:10:02.910 "dma_device_id": "system", 00:10:02.910 "dma_device_type": 1 00:10:02.910 }, 00:10:02.910 { 00:10:02.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.910 "dma_device_type": 2 00:10:02.910 } 00:10:02.910 ], 00:10:02.910 "driver_specific": {} 00:10:02.910 } 00:10:02.910 ] 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.910 11:26:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.910 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.911 "name": "Existed_Raid", 00:10:02.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.911 "strip_size_kb": 64, 00:10:02.911 "state": "configuring", 00:10:02.911 "raid_level": "raid0", 00:10:02.911 "superblock": false, 00:10:02.911 "num_base_bdevs": 3, 00:10:02.911 "num_base_bdevs_discovered": 1, 00:10:02.911 "num_base_bdevs_operational": 3, 00:10:02.911 "base_bdevs_list": [ 00:10:02.911 { 00:10:02.911 "name": "BaseBdev1", 00:10:02.911 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:02.911 "is_configured": true, 00:10:02.911 "data_offset": 0, 00:10:02.911 "data_size": 65536 00:10:02.911 }, 00:10:02.911 { 00:10:02.911 "name": "BaseBdev2", 00:10:02.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.911 "is_configured": false, 00:10:02.911 
"data_offset": 0, 00:10:02.911 "data_size": 0 00:10:02.911 }, 00:10:02.911 { 00:10:02.911 "name": "BaseBdev3", 00:10:02.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.911 "is_configured": false, 00:10:02.911 "data_offset": 0, 00:10:02.911 "data_size": 0 00:10:02.911 } 00:10:02.911 ] 00:10:02.911 }' 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.911 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.479 [2024-11-05 11:26:02.491420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.479 [2024-11-05 11:26:02.491488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.479 [2024-11-05 11:26:02.499461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.479 [2024-11-05 11:26:02.501337] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.479 [2024-11-05 11:26:02.501382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:03.479 [2024-11-05 11:26:02.501392] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.479 [2024-11-05 11:26:02.501401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.479 "name": "Existed_Raid", 00:10:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.479 "strip_size_kb": 64, 00:10:03.479 "state": "configuring", 00:10:03.479 "raid_level": "raid0", 00:10:03.479 "superblock": false, 00:10:03.479 "num_base_bdevs": 3, 00:10:03.479 "num_base_bdevs_discovered": 1, 00:10:03.479 "num_base_bdevs_operational": 3, 00:10:03.479 "base_bdevs_list": [ 00:10:03.479 { 00:10:03.479 "name": "BaseBdev1", 00:10:03.479 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:03.479 "is_configured": true, 00:10:03.479 "data_offset": 0, 00:10:03.479 "data_size": 65536 00:10:03.479 }, 00:10:03.479 { 00:10:03.479 "name": "BaseBdev2", 00:10:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.479 "is_configured": false, 00:10:03.479 "data_offset": 0, 00:10:03.479 "data_size": 0 00:10:03.479 }, 00:10:03.479 { 00:10:03.479 "name": "BaseBdev3", 00:10:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.479 "is_configured": false, 00:10:03.479 "data_offset": 0, 00:10:03.479 "data_size": 0 00:10:03.479 } 00:10:03.479 ] 00:10:03.479 }' 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.479 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.737 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.737 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:03.737 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.738 [2024-11-05 11:26:02.970959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.738 BaseBdev2 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.738 11:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.738 [ 00:10:03.738 { 00:10:03.738 "name": "BaseBdev2", 00:10:03.738 "aliases": [ 00:10:03.738 "f1b444fd-3949-4368-9020-be32d45ec188" 00:10:03.738 ], 00:10:03.738 
"product_name": "Malloc disk", 00:10:03.738 "block_size": 512, 00:10:03.738 "num_blocks": 65536, 00:10:03.738 "uuid": "f1b444fd-3949-4368-9020-be32d45ec188", 00:10:03.738 "assigned_rate_limits": { 00:10:03.738 "rw_ios_per_sec": 0, 00:10:03.738 "rw_mbytes_per_sec": 0, 00:10:03.738 "r_mbytes_per_sec": 0, 00:10:03.738 "w_mbytes_per_sec": 0 00:10:03.738 }, 00:10:03.738 "claimed": true, 00:10:03.738 "claim_type": "exclusive_write", 00:10:03.738 "zoned": false, 00:10:03.738 "supported_io_types": { 00:10:03.738 "read": true, 00:10:03.738 "write": true, 00:10:03.738 "unmap": true, 00:10:03.738 "flush": true, 00:10:03.738 "reset": true, 00:10:03.738 "nvme_admin": false, 00:10:03.738 "nvme_io": false, 00:10:03.738 "nvme_io_md": false, 00:10:03.738 "write_zeroes": true, 00:10:03.738 "zcopy": true, 00:10:03.738 "get_zone_info": false, 00:10:03.738 "zone_management": false, 00:10:03.738 "zone_append": false, 00:10:03.738 "compare": false, 00:10:03.738 "compare_and_write": false, 00:10:03.738 "abort": true, 00:10:03.738 "seek_hole": false, 00:10:03.738 "seek_data": false, 00:10:03.738 "copy": true, 00:10:03.738 "nvme_iov_md": false 00:10:03.738 }, 00:10:03.738 "memory_domains": [ 00:10:03.738 { 00:10:03.738 "dma_device_id": "system", 00:10:03.738 "dma_device_type": 1 00:10:03.738 }, 00:10:03.738 { 00:10:03.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.738 "dma_device_type": 2 00:10:03.738 } 00:10:03.738 ], 00:10:03.738 "driver_specific": {} 00:10:03.738 } 00:10:03.738 ] 00:10:03.738 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.996 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.997 "name": "Existed_Raid", 00:10:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.997 "strip_size_kb": 64, 00:10:03.997 "state": "configuring", 00:10:03.997 "raid_level": "raid0", 00:10:03.997 "superblock": false, 00:10:03.997 
"num_base_bdevs": 3, 00:10:03.997 "num_base_bdevs_discovered": 2, 00:10:03.997 "num_base_bdevs_operational": 3, 00:10:03.997 "base_bdevs_list": [ 00:10:03.997 { 00:10:03.997 "name": "BaseBdev1", 00:10:03.997 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:03.997 "is_configured": true, 00:10:03.997 "data_offset": 0, 00:10:03.997 "data_size": 65536 00:10:03.997 }, 00:10:03.997 { 00:10:03.997 "name": "BaseBdev2", 00:10:03.997 "uuid": "f1b444fd-3949-4368-9020-be32d45ec188", 00:10:03.997 "is_configured": true, 00:10:03.997 "data_offset": 0, 00:10:03.997 "data_size": 65536 00:10:03.997 }, 00:10:03.997 { 00:10:03.997 "name": "BaseBdev3", 00:10:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.997 "is_configured": false, 00:10:03.997 "data_offset": 0, 00:10:03.997 "data_size": 0 00:10:03.997 } 00:10:03.997 ] 00:10:03.997 }' 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.997 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 [2024-11-05 11:26:03.492236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.274 [2024-11-05 11:26:03.492397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.274 [2024-11-05 11:26:03.492424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:04.274 [2024-11-05 11:26:03.492756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.274 [2024-11-05 11:26:03.492935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:04.274 [2024-11-05 11:26:03.492946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:04.274 [2024-11-05 11:26:03.493259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.274 BaseBdev3 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.274 [ 00:10:04.274 { 00:10:04.274 "name": "BaseBdev3", 00:10:04.274 "aliases": [ 00:10:04.274 
"80c5db89-6a72-4529-a2df-66bb48659186" 00:10:04.274 ], 00:10:04.274 "product_name": "Malloc disk", 00:10:04.274 "block_size": 512, 00:10:04.274 "num_blocks": 65536, 00:10:04.274 "uuid": "80c5db89-6a72-4529-a2df-66bb48659186", 00:10:04.274 "assigned_rate_limits": { 00:10:04.274 "rw_ios_per_sec": 0, 00:10:04.274 "rw_mbytes_per_sec": 0, 00:10:04.274 "r_mbytes_per_sec": 0, 00:10:04.274 "w_mbytes_per_sec": 0 00:10:04.274 }, 00:10:04.274 "claimed": true, 00:10:04.274 "claim_type": "exclusive_write", 00:10:04.274 "zoned": false, 00:10:04.274 "supported_io_types": { 00:10:04.274 "read": true, 00:10:04.274 "write": true, 00:10:04.274 "unmap": true, 00:10:04.274 "flush": true, 00:10:04.274 "reset": true, 00:10:04.274 "nvme_admin": false, 00:10:04.274 "nvme_io": false, 00:10:04.274 "nvme_io_md": false, 00:10:04.274 "write_zeroes": true, 00:10:04.274 "zcopy": true, 00:10:04.274 "get_zone_info": false, 00:10:04.274 "zone_management": false, 00:10:04.274 "zone_append": false, 00:10:04.274 "compare": false, 00:10:04.274 "compare_and_write": false, 00:10:04.274 "abort": true, 00:10:04.274 "seek_hole": false, 00:10:04.274 "seek_data": false, 00:10:04.274 "copy": true, 00:10:04.274 "nvme_iov_md": false 00:10:04.274 }, 00:10:04.274 "memory_domains": [ 00:10:04.274 { 00:10:04.274 "dma_device_id": "system", 00:10:04.274 "dma_device_type": 1 00:10:04.274 }, 00:10:04.274 { 00:10:04.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.274 "dma_device_type": 2 00:10:04.274 } 00:10:04.274 ], 00:10:04.274 "driver_specific": {} 00:10:04.274 } 00:10:04.274 ] 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.274 
11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.274 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.275 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.544 "name": "Existed_Raid", 00:10:04.544 "uuid": "3fad3474-c315-49a0-b619-7197ead9743f", 00:10:04.544 "strip_size_kb": 64, 00:10:04.544 "state": "online", 00:10:04.544 
"raid_level": "raid0", 00:10:04.544 "superblock": false, 00:10:04.544 "num_base_bdevs": 3, 00:10:04.544 "num_base_bdevs_discovered": 3, 00:10:04.544 "num_base_bdevs_operational": 3, 00:10:04.544 "base_bdevs_list": [ 00:10:04.544 { 00:10:04.544 "name": "BaseBdev1", 00:10:04.544 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:04.544 "is_configured": true, 00:10:04.544 "data_offset": 0, 00:10:04.544 "data_size": 65536 00:10:04.544 }, 00:10:04.544 { 00:10:04.544 "name": "BaseBdev2", 00:10:04.544 "uuid": "f1b444fd-3949-4368-9020-be32d45ec188", 00:10:04.544 "is_configured": true, 00:10:04.544 "data_offset": 0, 00:10:04.544 "data_size": 65536 00:10:04.544 }, 00:10:04.544 { 00:10:04.544 "name": "BaseBdev3", 00:10:04.544 "uuid": "80c5db89-6a72-4529-a2df-66bb48659186", 00:10:04.544 "is_configured": true, 00:10:04.544 "data_offset": 0, 00:10:04.544 "data_size": 65536 00:10:04.544 } 00:10:04.544 ] 00:10:04.544 }' 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.544 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.804 [2024-11-05 11:26:03.967919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.804 "name": "Existed_Raid", 00:10:04.804 "aliases": [ 00:10:04.804 "3fad3474-c315-49a0-b619-7197ead9743f" 00:10:04.804 ], 00:10:04.804 "product_name": "Raid Volume", 00:10:04.804 "block_size": 512, 00:10:04.804 "num_blocks": 196608, 00:10:04.804 "uuid": "3fad3474-c315-49a0-b619-7197ead9743f", 00:10:04.804 "assigned_rate_limits": { 00:10:04.804 "rw_ios_per_sec": 0, 00:10:04.804 "rw_mbytes_per_sec": 0, 00:10:04.804 "r_mbytes_per_sec": 0, 00:10:04.804 "w_mbytes_per_sec": 0 00:10:04.804 }, 00:10:04.804 "claimed": false, 00:10:04.804 "zoned": false, 00:10:04.804 "supported_io_types": { 00:10:04.804 "read": true, 00:10:04.804 "write": true, 00:10:04.804 "unmap": true, 00:10:04.804 "flush": true, 00:10:04.804 "reset": true, 00:10:04.804 "nvme_admin": false, 00:10:04.804 "nvme_io": false, 00:10:04.804 "nvme_io_md": false, 00:10:04.804 "write_zeroes": true, 00:10:04.804 "zcopy": false, 00:10:04.804 "get_zone_info": false, 00:10:04.804 "zone_management": false, 00:10:04.804 "zone_append": false, 00:10:04.804 "compare": false, 00:10:04.804 "compare_and_write": false, 00:10:04.804 "abort": false, 00:10:04.804 "seek_hole": false, 00:10:04.804 "seek_data": false, 00:10:04.804 "copy": false, 00:10:04.804 "nvme_iov_md": false 00:10:04.804 }, 00:10:04.804 "memory_domains": [ 00:10:04.804 { 00:10:04.804 "dma_device_id": "system", 00:10:04.804 "dma_device_type": 1 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.804 "dma_device_type": 2 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "dma_device_id": "system", 00:10:04.804 "dma_device_type": 1 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.804 "dma_device_type": 2 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "dma_device_id": "system", 00:10:04.804 "dma_device_type": 1 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.804 "dma_device_type": 2 00:10:04.804 } 00:10:04.804 ], 00:10:04.804 "driver_specific": { 00:10:04.804 "raid": { 00:10:04.804 "uuid": "3fad3474-c315-49a0-b619-7197ead9743f", 00:10:04.804 "strip_size_kb": 64, 00:10:04.804 "state": "online", 00:10:04.804 "raid_level": "raid0", 00:10:04.804 "superblock": false, 00:10:04.804 "num_base_bdevs": 3, 00:10:04.804 "num_base_bdevs_discovered": 3, 00:10:04.804 "num_base_bdevs_operational": 3, 00:10:04.804 "base_bdevs_list": [ 00:10:04.804 { 00:10:04.804 "name": "BaseBdev1", 00:10:04.804 "uuid": "3f8268b0-919c-4bfc-98a2-8fadbd6135f5", 00:10:04.804 "is_configured": true, 00:10:04.804 "data_offset": 0, 00:10:04.804 "data_size": 65536 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "name": "BaseBdev2", 00:10:04.804 "uuid": "f1b444fd-3949-4368-9020-be32d45ec188", 00:10:04.804 "is_configured": true, 00:10:04.804 "data_offset": 0, 00:10:04.804 "data_size": 65536 00:10:04.804 }, 00:10:04.804 { 00:10:04.804 "name": "BaseBdev3", 00:10:04.804 "uuid": "80c5db89-6a72-4529-a2df-66bb48659186", 00:10:04.804 "is_configured": true, 00:10:04.804 "data_offset": 0, 00:10:04.804 "data_size": 65536 00:10:04.804 } 00:10:04.804 ] 00:10:04.804 } 00:10:04.804 } 00:10:04.804 }' 00:10:04.804 11:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.804 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:10:04.804 BaseBdev2 00:10:04.804 BaseBdev3' 00:10:04.804 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.064 11:26:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.064 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.064 [2024-11-05 11:26:04.247223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.064 [2024-11-05 11:26:04.247253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.064 [2024-11-05 11:26:04.247307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.324 11:26:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.324 "name": "Existed_Raid", 00:10:05.324 "uuid": "3fad3474-c315-49a0-b619-7197ead9743f", 00:10:05.324 "strip_size_kb": 64, 00:10:05.324 "state": "offline", 00:10:05.324 "raid_level": "raid0", 00:10:05.324 "superblock": false, 00:10:05.324 "num_base_bdevs": 3, 00:10:05.324 "num_base_bdevs_discovered": 2, 00:10:05.324 "num_base_bdevs_operational": 2, 00:10:05.324 "base_bdevs_list": [ 00:10:05.324 { 00:10:05.324 "name": null, 00:10:05.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.324 "is_configured": false, 00:10:05.324 "data_offset": 0, 00:10:05.324 "data_size": 65536 00:10:05.324 }, 00:10:05.324 { 00:10:05.324 "name": "BaseBdev2", 00:10:05.324 "uuid": "f1b444fd-3949-4368-9020-be32d45ec188", 00:10:05.324 "is_configured": true, 00:10:05.324 "data_offset": 0, 00:10:05.324 "data_size": 65536 00:10:05.324 }, 00:10:05.324 { 00:10:05.324 "name": "BaseBdev3", 00:10:05.324 "uuid": "80c5db89-6a72-4529-a2df-66bb48659186", 00:10:05.324 "is_configured": true, 00:10:05.324 "data_offset": 0, 00:10:05.324 "data_size": 65536 00:10:05.324 } 00:10:05.324 ] 00:10:05.324 }' 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.324 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 [2024-11-05 11:26:04.823243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:05.843 11:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.843 [2024-11-05 11:26:04.979630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:05.843 [2024-11-05 11:26:04.979770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.843 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.102 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.102 BaseBdev2
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 [
00:10:06.103 {
00:10:06.103 "name": "BaseBdev2",
00:10:06.103 "aliases": [
00:10:06.103 "d5451845-bc50-4a46-a829-e6e345873500"
00:10:06.103 ],
00:10:06.103 "product_name": "Malloc disk",
00:10:06.103 "block_size": 512,
00:10:06.103 "num_blocks": 65536,
00:10:06.103 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:06.103 "assigned_rate_limits": {
00:10:06.103 "rw_ios_per_sec": 0,
00:10:06.103 "rw_mbytes_per_sec": 0,
00:10:06.103 "r_mbytes_per_sec": 0,
00:10:06.103 "w_mbytes_per_sec": 0
00:10:06.103 },
00:10:06.103 "claimed": false,
00:10:06.103 "zoned": false,
00:10:06.103 "supported_io_types": {
00:10:06.103 "read": true,
00:10:06.103 "write": true,
00:10:06.103 "unmap": true,
00:10:06.103 "flush": true,
00:10:06.103 "reset": true,
00:10:06.103 "nvme_admin": false,
00:10:06.103 "nvme_io": false,
00:10:06.103 "nvme_io_md": false,
00:10:06.103 "write_zeroes": true,
00:10:06.103 "zcopy": true,
00:10:06.103 "get_zone_info": false,
00:10:06.103 "zone_management": false,
00:10:06.103 "zone_append": false,
00:10:06.103 "compare": false,
00:10:06.103 "compare_and_write": false,
00:10:06.103 "abort": true,
00:10:06.103 "seek_hole": false,
00:10:06.103 "seek_data": false,
00:10:06.103 "copy": true,
00:10:06.103 "nvme_iov_md": false
00:10:06.103 },
00:10:06.103 "memory_domains": [
00:10:06.103 {
00:10:06.103 "dma_device_id": "system",
00:10:06.103 "dma_device_type": 1
00:10:06.103 },
00:10:06.103 {
00:10:06.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:06.103 "dma_device_type": 2
00:10:06.103 }
00:10:06.103 ],
00:10:06.103 "driver_specific": {}
00:10:06.103 }
00:10:06.103 ]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 BaseBdev3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 [
00:10:06.103 {
00:10:06.103 "name": "BaseBdev3",
00:10:06.103 "aliases": [
00:10:06.103 "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8"
00:10:06.103 ],
00:10:06.103 "product_name": "Malloc disk",
00:10:06.103 "block_size": 512,
00:10:06.103 "num_blocks": 65536,
00:10:06.103 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:06.103 "assigned_rate_limits": {
00:10:06.103 "rw_ios_per_sec": 0,
00:10:06.103 "rw_mbytes_per_sec": 0,
00:10:06.103 "r_mbytes_per_sec": 0,
00:10:06.103 "w_mbytes_per_sec": 0
00:10:06.103 },
00:10:06.103 "claimed": false,
00:10:06.103 "zoned": false,
00:10:06.103 "supported_io_types": {
00:10:06.103 "read": true,
00:10:06.103 "write": true,
00:10:06.103 "unmap": true,
00:10:06.103 "flush": true,
00:10:06.103 "reset": true,
00:10:06.103 "nvme_admin": false,
00:10:06.103 "nvme_io": false,
00:10:06.103 "nvme_io_md": false,
00:10:06.103 "write_zeroes": true,
00:10:06.103 "zcopy": true,
00:10:06.103 "get_zone_info": false,
00:10:06.103 "zone_management": false,
00:10:06.103 "zone_append": false,
00:10:06.103 "compare": false,
00:10:06.103 "compare_and_write": false,
00:10:06.103 "abort": true,
00:10:06.103 "seek_hole": false,
00:10:06.103 "seek_data": false,
00:10:06.103 "copy": true,
00:10:06.103 "nvme_iov_md": false
00:10:06.103 },
00:10:06.103 "memory_domains": [
00:10:06.103 {
00:10:06.103 "dma_device_id": "system",
00:10:06.103 "dma_device_type": 1
00:10:06.103 },
00:10:06.103 {
00:10:06.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:06.103 "dma_device_type": 2
00:10:06.103 }
00:10:06.103 ],
00:10:06.103 "driver_specific": {}
00:10:06.103 }
00:10:06.103 ]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 [2024-11-05 11:26:05.299472] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:06.103 [2024-11-05 11:26:05.299563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:06.103 [2024-11-05 11:26:05.299612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:06.103 [2024-11-05 11:26:05.301441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.103 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.103 "name": "Existed_Raid",
00:10:06.103 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.103 "strip_size_kb": 64,
00:10:06.103 "state": "configuring",
00:10:06.103 "raid_level": "raid0",
00:10:06.103 "superblock": false,
00:10:06.103 "num_base_bdevs": 3,
00:10:06.103 "num_base_bdevs_discovered": 2,
00:10:06.103 "num_base_bdevs_operational": 3,
00:10:06.103 "base_bdevs_list": [
00:10:06.103 {
00:10:06.103 "name": "BaseBdev1",
00:10:06.103 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.103 "is_configured": false,
00:10:06.103 "data_offset": 0,
00:10:06.103 "data_size": 0
00:10:06.103 },
00:10:06.104 {
00:10:06.104 "name": "BaseBdev2",
00:10:06.104 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:06.104 "is_configured": true,
00:10:06.104 "data_offset": 0,
00:10:06.104 "data_size": 65536
00:10:06.104 },
00:10:06.104 {
00:10:06.104 "name": "BaseBdev3",
00:10:06.104 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:06.104 "is_configured": true,
00:10:06.104 "data_offset": 0,
00:10:06.104 "data_size": 65536
00:10:06.104 }
00:10:06.104 ]
00:10:06.104 }'
00:10:06.104 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.104 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.672 [2024-11-05 11:26:05.722977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.672 "name": "Existed_Raid",
00:10:06.672 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.672 "strip_size_kb": 64,
00:10:06.672 "state": "configuring",
00:10:06.672 "raid_level": "raid0",
00:10:06.672 "superblock": false,
00:10:06.672 "num_base_bdevs": 3,
00:10:06.672 "num_base_bdevs_discovered": 1,
00:10:06.672 "num_base_bdevs_operational": 3,
00:10:06.672 "base_bdevs_list": [
00:10:06.672 {
00:10:06.672 "name": "BaseBdev1",
00:10:06.672 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.672 "is_configured": false,
00:10:06.672 "data_offset": 0,
00:10:06.672 "data_size": 0
00:10:06.672 },
00:10:06.672 {
00:10:06.672 "name": null,
00:10:06.672 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:06.672 "is_configured": false,
00:10:06.672 "data_offset": 0,
00:10:06.672 "data_size": 65536
00:10:06.672 },
00:10:06.672 {
00:10:06.672 "name": "BaseBdev3",
00:10:06.672 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:06.672 "is_configured": true,
00:10:06.672 "data_offset": 0,
00:10:06.672 "data_size": 65536
00:10:06.672 }
00:10:06.672 ]
00:10:06.672 }'
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.672 11:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.931 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.190 [2024-11-05 11:26:06.211293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:07.190 BaseBdev1
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.190 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.190 [
00:10:07.190 {
00:10:07.190 "name": "BaseBdev1",
00:10:07.190 "aliases": [
00:10:07.190 "39d66054-bab6-4ed8-a8bd-18223cc64e59"
00:10:07.190 ],
00:10:07.190 "product_name": "Malloc disk",
00:10:07.190 "block_size": 512,
00:10:07.190 "num_blocks": 65536,
00:10:07.191 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59",
00:10:07.191 "assigned_rate_limits": {
00:10:07.191 "rw_ios_per_sec": 0,
00:10:07.191 "rw_mbytes_per_sec": 0,
00:10:07.191 "r_mbytes_per_sec": 0,
00:10:07.191 "w_mbytes_per_sec": 0
00:10:07.191 },
00:10:07.191 "claimed": true,
00:10:07.191 "claim_type": "exclusive_write",
00:10:07.191 "zoned": false,
00:10:07.191 "supported_io_types": {
00:10:07.191 "read": true,
00:10:07.191 "write": true,
00:10:07.191 "unmap": true,
00:10:07.191 "flush": true,
00:10:07.191 "reset": true,
00:10:07.191 "nvme_admin": false,
00:10:07.191 "nvme_io": false,
00:10:07.191 "nvme_io_md": false,
00:10:07.191 "write_zeroes": true,
00:10:07.191 "zcopy": true,
00:10:07.191 "get_zone_info": false,
00:10:07.191 "zone_management": false,
00:10:07.191 "zone_append": false,
00:10:07.191 "compare": false,
00:10:07.191 "compare_and_write": false,
00:10:07.191 "abort": true,
00:10:07.191 "seek_hole": false,
00:10:07.191 "seek_data": false,
00:10:07.191 "copy": true,
00:10:07.191 "nvme_iov_md": false
00:10:07.191 },
00:10:07.191 "memory_domains": [
00:10:07.191 {
00:10:07.191 "dma_device_id": "system",
00:10:07.191 "dma_device_type": 1
00:10:07.191 },
00:10:07.191 {
00:10:07.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:07.191 "dma_device_type": 2
00:10:07.191 }
00:10:07.191 ],
00:10:07.191 "driver_specific": {}
00:10:07.191 }
00:10:07.191 ]
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.191 "name": "Existed_Raid",
00:10:07.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:07.191 "strip_size_kb": 64,
00:10:07.191 "state": "configuring",
00:10:07.191 "raid_level": "raid0",
00:10:07.191 "superblock": false,
00:10:07.191 "num_base_bdevs": 3,
00:10:07.191 "num_base_bdevs_discovered": 2,
00:10:07.191 "num_base_bdevs_operational": 3,
00:10:07.191 "base_bdevs_list": [
00:10:07.191 {
00:10:07.191 "name": "BaseBdev1",
00:10:07.191 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59",
00:10:07.191 "is_configured": true,
00:10:07.191 "data_offset": 0,
00:10:07.191 "data_size": 65536
00:10:07.191 },
00:10:07.191 {
00:10:07.191 "name": null,
00:10:07.191 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:07.191 "is_configured": false,
00:10:07.191 "data_offset": 0,
00:10:07.191 "data_size": 65536
00:10:07.191 },
00:10:07.191 {
00:10:07.191 "name": "BaseBdev3",
00:10:07.191 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:07.191 "is_configured": true,
00:10:07.191 "data_offset": 0,
00:10:07.191 "data_size": 65536
00:10:07.191 }
00:10:07.191 ]
00:10:07.191 }'
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.191 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.759 [2024-11-05 11:26:06.782406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.759 "name": "Existed_Raid",
00:10:07.759 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:07.759 "strip_size_kb": 64,
00:10:07.759 "state": "configuring",
00:10:07.759 "raid_level": "raid0",
00:10:07.759 "superblock": false,
00:10:07.759 "num_base_bdevs": 3,
00:10:07.759 "num_base_bdevs_discovered": 1,
00:10:07.759 "num_base_bdevs_operational": 3,
00:10:07.759 "base_bdevs_list": [
00:10:07.759 {
00:10:07.759 "name": "BaseBdev1",
00:10:07.759 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59",
00:10:07.759 "is_configured": true,
00:10:07.759 "data_offset": 0,
00:10:07.759 "data_size": 65536
00:10:07.759 },
00:10:07.759 {
00:10:07.759 "name": null,
00:10:07.759 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:07.759 "is_configured": false,
00:10:07.759 "data_offset": 0,
00:10:07.759 "data_size": 65536
00:10:07.759 },
00:10:07.759 {
00:10:07.759 "name": null,
00:10:07.759 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:07.759 "is_configured": false,
00:10:07.759 "data_offset": 0,
00:10:07.759 "data_size": 65536
00:10:07.759 }
00:10:07.759 ]
00:10:07.759 }'
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.759 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.018 [2024-11-05 11:26:07.285605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.018 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.277 "name": "Existed_Raid",
00:10:08.277 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.277 "strip_size_kb": 64,
00:10:08.277 "state": "configuring",
00:10:08.277 "raid_level": "raid0",
00:10:08.277 "superblock": false,
00:10:08.277 "num_base_bdevs": 3,
00:10:08.277 "num_base_bdevs_discovered": 2,
00:10:08.277 "num_base_bdevs_operational": 3,
00:10:08.277 "base_bdevs_list": [
00:10:08.277 {
00:10:08.277 "name": "BaseBdev1",
00:10:08.277 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59",
00:10:08.277 "is_configured": true,
00:10:08.277 "data_offset": 0,
00:10:08.277 "data_size": 65536
00:10:08.277 },
00:10:08.277 {
00:10:08.277 "name": null,
00:10:08.277 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:08.277 "is_configured": false,
00:10:08.277 "data_offset": 0,
00:10:08.277 "data_size": 65536
00:10:08.277 },
00:10:08.277 {
00:10:08.277 "name": "BaseBdev3",
00:10:08.277 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:08.277 "is_configured": true,
00:10:08.277 "data_offset": 0,
00:10:08.277 "data_size": 65536
00:10:08.277 }
00:10:08.277 ]
00:10:08.277 }'
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.277 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.535 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.535 [2024-11-05 11:26:07.764837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.794 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.794 "name": "Existed_Raid",
00:10:08.794 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.794 "strip_size_kb": 64,
00:10:08.794 "state": "configuring",
00:10:08.794 "raid_level": "raid0",
00:10:08.794 "superblock": false,
00:10:08.794 "num_base_bdevs": 3,
00:10:08.794 "num_base_bdevs_discovered": 1,
00:10:08.794 "num_base_bdevs_operational": 3,
00:10:08.794 "base_bdevs_list": [
00:10:08.795 {
00:10:08.795 "name": null,
00:10:08.795 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59",
00:10:08.795 "is_configured": false,
00:10:08.795 "data_offset": 0,
00:10:08.795 "data_size": 65536
00:10:08.795 },
00:10:08.795 {
00:10:08.795 "name": null,
00:10:08.795 "uuid": "d5451845-bc50-4a46-a829-e6e345873500",
00:10:08.795 "is_configured": false,
00:10:08.795 "data_offset": 0,
00:10:08.795 "data_size": 65536
00:10:08.795 },
00:10:08.795 {
00:10:08.795 "name": "BaseBdev3",
00:10:08.795 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8",
00:10:08.795 "is_configured": true,
00:10:08.795 "data_offset": 0,
00:10:08.795 "data_size": 65536
00:10:08.795 }
00:10:08.795 ]
00:10:08.795 }'
00:10:08.795 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.795 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.054 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:09.313 [2024-11-05 11:26:08.330724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:09.313 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0
== 0 ]] 00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.314 "name": "Existed_Raid", 00:10:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.314 "strip_size_kb": 64, 00:10:09.314 "state": "configuring", 00:10:09.314 "raid_level": "raid0", 00:10:09.314 "superblock": false, 00:10:09.314 "num_base_bdevs": 3, 00:10:09.314 "num_base_bdevs_discovered": 2, 00:10:09.314 "num_base_bdevs_operational": 3, 00:10:09.314 "base_bdevs_list": [ 00:10:09.314 { 00:10:09.314 "name": null, 00:10:09.314 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59", 00:10:09.314 "is_configured": false, 00:10:09.314 "data_offset": 0, 00:10:09.314 "data_size": 65536 00:10:09.314 }, 00:10:09.314 { 00:10:09.314 "name": "BaseBdev2", 00:10:09.314 "uuid": "d5451845-bc50-4a46-a829-e6e345873500", 00:10:09.314 "is_configured": true, 00:10:09.314 "data_offset": 0, 00:10:09.314 "data_size": 65536 00:10:09.314 }, 00:10:09.314 { 00:10:09.314 "name": "BaseBdev3", 00:10:09.314 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8", 00:10:09.314 "is_configured": true, 00:10:09.314 "data_offset": 0, 00:10:09.314 "data_size": 65536 00:10:09.314 } 00:10:09.314 ] 00:10:09.314 }' 00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.314 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.573 
11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.573 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 39d66054-bab6-4ed8-a8bd-18223cc64e59 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 [2024-11-05 11:26:08.891900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.832 [2024-11-05 11:26:08.892011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.832 [2024-11-05 11:26:08.892027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:09.832 [2024-11-05 11:26:08.892306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:09.832 [2024-11-05 11:26:08.892454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.832 [2024-11-05 11:26:08.892463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.832 [2024-11-05 11:26:08.892696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.832 NewBaseBdev 00:10:09.832 11:26:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.832 [ 00:10:09.832 { 00:10:09.832 "name": "NewBaseBdev", 00:10:09.832 "aliases": [ 00:10:09.832 "39d66054-bab6-4ed8-a8bd-18223cc64e59" 00:10:09.832 ], 00:10:09.832 "product_name": "Malloc disk", 00:10:09.832 "block_size": 512, 00:10:09.832 "num_blocks": 65536, 00:10:09.832 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59", 00:10:09.832 "assigned_rate_limits": { 00:10:09.832 "rw_ios_per_sec": 0, 00:10:09.832 "rw_mbytes_per_sec": 0, 
00:10:09.832 "r_mbytes_per_sec": 0, 00:10:09.832 "w_mbytes_per_sec": 0 00:10:09.832 }, 00:10:09.832 "claimed": true, 00:10:09.832 "claim_type": "exclusive_write", 00:10:09.832 "zoned": false, 00:10:09.832 "supported_io_types": { 00:10:09.832 "read": true, 00:10:09.832 "write": true, 00:10:09.832 "unmap": true, 00:10:09.832 "flush": true, 00:10:09.832 "reset": true, 00:10:09.832 "nvme_admin": false, 00:10:09.832 "nvme_io": false, 00:10:09.832 "nvme_io_md": false, 00:10:09.832 "write_zeroes": true, 00:10:09.832 "zcopy": true, 00:10:09.832 "get_zone_info": false, 00:10:09.832 "zone_management": false, 00:10:09.832 "zone_append": false, 00:10:09.832 "compare": false, 00:10:09.832 "compare_and_write": false, 00:10:09.832 "abort": true, 00:10:09.832 "seek_hole": false, 00:10:09.832 "seek_data": false, 00:10:09.832 "copy": true, 00:10:09.832 "nvme_iov_md": false 00:10:09.832 }, 00:10:09.832 "memory_domains": [ 00:10:09.832 { 00:10:09.832 "dma_device_id": "system", 00:10:09.832 "dma_device_type": 1 00:10:09.832 }, 00:10:09.832 { 00:10:09.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.832 "dma_device_type": 2 00:10:09.832 } 00:10:09.832 ], 00:10:09.832 "driver_specific": {} 00:10:09.832 } 00:10:09.832 ] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.832 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.833 "name": "Existed_Raid", 00:10:09.833 "uuid": "4893019e-8b67-46dd-82c6-43fed31abcaa", 00:10:09.833 "strip_size_kb": 64, 00:10:09.833 "state": "online", 00:10:09.833 "raid_level": "raid0", 00:10:09.833 "superblock": false, 00:10:09.833 "num_base_bdevs": 3, 00:10:09.833 "num_base_bdevs_discovered": 3, 00:10:09.833 "num_base_bdevs_operational": 3, 00:10:09.833 "base_bdevs_list": [ 00:10:09.833 { 00:10:09.833 "name": "NewBaseBdev", 00:10:09.833 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59", 00:10:09.833 "is_configured": true, 00:10:09.833 "data_offset": 0, 00:10:09.833 "data_size": 65536 00:10:09.833 }, 00:10:09.833 { 00:10:09.833 "name": "BaseBdev2", 00:10:09.833 "uuid": 
"d5451845-bc50-4a46-a829-e6e345873500", 00:10:09.833 "is_configured": true, 00:10:09.833 "data_offset": 0, 00:10:09.833 "data_size": 65536 00:10:09.833 }, 00:10:09.833 { 00:10:09.833 "name": "BaseBdev3", 00:10:09.833 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8", 00:10:09.833 "is_configured": true, 00:10:09.833 "data_offset": 0, 00:10:09.833 "data_size": 65536 00:10:09.833 } 00:10:09.833 ] 00:10:09.833 }' 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.833 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.092 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.352 [2024-11-05 11:26:09.375547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.352 11:26:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.352 "name": "Existed_Raid", 00:10:10.352 "aliases": [ 00:10:10.352 "4893019e-8b67-46dd-82c6-43fed31abcaa" 00:10:10.352 ], 00:10:10.352 "product_name": "Raid Volume", 00:10:10.352 "block_size": 512, 00:10:10.352 "num_blocks": 196608, 00:10:10.352 "uuid": "4893019e-8b67-46dd-82c6-43fed31abcaa", 00:10:10.352 "assigned_rate_limits": { 00:10:10.352 "rw_ios_per_sec": 0, 00:10:10.352 "rw_mbytes_per_sec": 0, 00:10:10.352 "r_mbytes_per_sec": 0, 00:10:10.352 "w_mbytes_per_sec": 0 00:10:10.352 }, 00:10:10.352 "claimed": false, 00:10:10.352 "zoned": false, 00:10:10.352 "supported_io_types": { 00:10:10.352 "read": true, 00:10:10.352 "write": true, 00:10:10.352 "unmap": true, 00:10:10.352 "flush": true, 00:10:10.352 "reset": true, 00:10:10.352 "nvme_admin": false, 00:10:10.352 "nvme_io": false, 00:10:10.352 "nvme_io_md": false, 00:10:10.352 "write_zeroes": true, 00:10:10.352 "zcopy": false, 00:10:10.352 "get_zone_info": false, 00:10:10.352 "zone_management": false, 00:10:10.352 "zone_append": false, 00:10:10.352 "compare": false, 00:10:10.352 "compare_and_write": false, 00:10:10.352 "abort": false, 00:10:10.352 "seek_hole": false, 00:10:10.352 "seek_data": false, 00:10:10.352 "copy": false, 00:10:10.352 "nvme_iov_md": false 00:10:10.352 }, 00:10:10.352 "memory_domains": [ 00:10:10.352 { 00:10:10.352 "dma_device_id": "system", 00:10:10.352 "dma_device_type": 1 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.352 "dma_device_type": 2 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "dma_device_id": "system", 00:10:10.352 "dma_device_type": 1 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.352 "dma_device_type": 2 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "dma_device_id": "system", 00:10:10.352 "dma_device_type": 1 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:10.352 "dma_device_type": 2 00:10:10.352 } 00:10:10.352 ], 00:10:10.352 "driver_specific": { 00:10:10.352 "raid": { 00:10:10.352 "uuid": "4893019e-8b67-46dd-82c6-43fed31abcaa", 00:10:10.352 "strip_size_kb": 64, 00:10:10.352 "state": "online", 00:10:10.352 "raid_level": "raid0", 00:10:10.352 "superblock": false, 00:10:10.352 "num_base_bdevs": 3, 00:10:10.352 "num_base_bdevs_discovered": 3, 00:10:10.352 "num_base_bdevs_operational": 3, 00:10:10.352 "base_bdevs_list": [ 00:10:10.352 { 00:10:10.352 "name": "NewBaseBdev", 00:10:10.352 "uuid": "39d66054-bab6-4ed8-a8bd-18223cc64e59", 00:10:10.352 "is_configured": true, 00:10:10.352 "data_offset": 0, 00:10:10.352 "data_size": 65536 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "name": "BaseBdev2", 00:10:10.352 "uuid": "d5451845-bc50-4a46-a829-e6e345873500", 00:10:10.352 "is_configured": true, 00:10:10.352 "data_offset": 0, 00:10:10.352 "data_size": 65536 00:10:10.352 }, 00:10:10.352 { 00:10:10.352 "name": "BaseBdev3", 00:10:10.352 "uuid": "0058b7de-ef5b-411e-bfc3-d4d4ce37bac8", 00:10:10.352 "is_configured": true, 00:10:10.352 "data_offset": 0, 00:10:10.352 "data_size": 65536 00:10:10.352 } 00:10:10.352 ] 00:10:10.352 } 00:10:10.352 } 00:10:10.352 }' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.352 BaseBdev2 00:10:10.352 BaseBdev3' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.352 11:26:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.352 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.611 [2024-11-05 11:26:09.670814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.611 [2024-11-05 11:26:09.670849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.611 [2024-11-05 11:26:09.670940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.611 [2024-11-05 11:26:09.671024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.611 [2024-11-05 11:26:09.671039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63949 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 63949 ']' 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63949 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63949 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63949' 00:10:10.611 killing process with pid 63949 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63949 00:10:10.611 [2024-11-05 11:26:09.719224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.611 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63949 00:10:10.868 [2024-11-05 11:26:10.026676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.248 00:10:12.248 real 0m10.580s 00:10:12.248 user 0m16.774s 00:10:12.248 sys 0m1.918s 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 ************************************ 00:10:12.248 END TEST raid_state_function_test 00:10:12.248 ************************************ 00:10:12.248 11:26:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 
00:10:12.248 11:26:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:12.248 11:26:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.248 11:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 ************************************ 00:10:12.248 START TEST raid_state_function_test_sb 00:10:12.248 ************************************ 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.248 11:26:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.248 Process raid pid: 64570 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64570 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64570' 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 64570 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64570 ']' 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.248 11:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.248 [2024-11-05 11:26:11.293506] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:12.248 [2024-11-05 11:26:11.293674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.248 [2024-11-05 11:26:11.466163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.506 [2024-11-05 11:26:11.583860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.765 [2024-11-05 11:26:11.785907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.765 [2024-11-05 11:26:11.785948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.024 [2024-11-05 11:26:12.148095] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.024 [2024-11-05 11:26:12.148158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.024 [2024-11-05 11:26:12.148169] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.024 [2024-11-05 11:26:12.148178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.024 [2024-11-05 11:26:12.148185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:13.024 [2024-11-05 11:26:12.148193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.024 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.025 "name": "Existed_Raid", 00:10:13.025 "uuid": "974b5eea-bb74-4385-8b51-3439b539b1cd", 00:10:13.025 "strip_size_kb": 64, 00:10:13.025 "state": "configuring", 00:10:13.025 "raid_level": "raid0", 00:10:13.025 "superblock": true, 00:10:13.025 "num_base_bdevs": 3, 00:10:13.025 "num_base_bdevs_discovered": 0, 00:10:13.025 "num_base_bdevs_operational": 3, 00:10:13.025 "base_bdevs_list": [ 00:10:13.025 { 00:10:13.025 "name": "BaseBdev1", 00:10:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.025 "is_configured": false, 00:10:13.025 "data_offset": 0, 00:10:13.025 "data_size": 0 00:10:13.025 }, 00:10:13.025 { 00:10:13.025 "name": "BaseBdev2", 00:10:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.025 "is_configured": false, 00:10:13.025 "data_offset": 0, 00:10:13.025 "data_size": 0 00:10:13.025 }, 00:10:13.025 { 00:10:13.025 "name": "BaseBdev3", 00:10:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.025 "is_configured": false, 00:10:13.025 "data_offset": 0, 00:10:13.025 "data_size": 0 00:10:13.025 } 00:10:13.025 ] 00:10:13.025 }' 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.025 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [2024-11-05 11:26:12.623219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.593 [2024-11-05 11:26:12.623303] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [2024-11-05 11:26:12.631212] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.593 [2024-11-05 11:26:12.631292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.593 [2024-11-05 11:26:12.631319] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.593 [2024-11-05 11:26:12.631342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.593 [2024-11-05 11:26:12.631360] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.593 [2024-11-05 11:26:12.631382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [2024-11-05 11:26:12.674413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.593 BaseBdev1 
00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.593 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.593 [ 00:10:13.593 { 00:10:13.593 "name": "BaseBdev1", 00:10:13.593 "aliases": [ 00:10:13.593 "7250b2f5-a98e-4445-984f-117655ff8c4a" 00:10:13.593 ], 00:10:13.593 "product_name": "Malloc disk", 00:10:13.593 "block_size": 512, 00:10:13.593 "num_blocks": 65536, 00:10:13.593 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:13.593 "assigned_rate_limits": { 00:10:13.593 
"rw_ios_per_sec": 0, 00:10:13.593 "rw_mbytes_per_sec": 0, 00:10:13.593 "r_mbytes_per_sec": 0, 00:10:13.593 "w_mbytes_per_sec": 0 00:10:13.593 }, 00:10:13.593 "claimed": true, 00:10:13.593 "claim_type": "exclusive_write", 00:10:13.593 "zoned": false, 00:10:13.593 "supported_io_types": { 00:10:13.593 "read": true, 00:10:13.593 "write": true, 00:10:13.593 "unmap": true, 00:10:13.593 "flush": true, 00:10:13.593 "reset": true, 00:10:13.593 "nvme_admin": false, 00:10:13.593 "nvme_io": false, 00:10:13.593 "nvme_io_md": false, 00:10:13.593 "write_zeroes": true, 00:10:13.594 "zcopy": true, 00:10:13.594 "get_zone_info": false, 00:10:13.594 "zone_management": false, 00:10:13.594 "zone_append": false, 00:10:13.594 "compare": false, 00:10:13.594 "compare_and_write": false, 00:10:13.594 "abort": true, 00:10:13.594 "seek_hole": false, 00:10:13.594 "seek_data": false, 00:10:13.594 "copy": true, 00:10:13.594 "nvme_iov_md": false 00:10:13.594 }, 00:10:13.594 "memory_domains": [ 00:10:13.594 { 00:10:13.594 "dma_device_id": "system", 00:10:13.594 "dma_device_type": 1 00:10:13.594 }, 00:10:13.594 { 00:10:13.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.594 "dma_device_type": 2 00:10:13.594 } 00:10:13.594 ], 00:10:13.594 "driver_specific": {} 00:10:13.594 } 00:10:13.594 ] 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.594 "name": "Existed_Raid", 00:10:13.594 "uuid": "e1cef032-eb83-4604-8932-aa58fa92d2d2", 00:10:13.594 "strip_size_kb": 64, 00:10:13.594 "state": "configuring", 00:10:13.594 "raid_level": "raid0", 00:10:13.594 "superblock": true, 00:10:13.594 "num_base_bdevs": 3, 00:10:13.594 "num_base_bdevs_discovered": 1, 00:10:13.594 "num_base_bdevs_operational": 3, 00:10:13.594 "base_bdevs_list": [ 00:10:13.594 { 00:10:13.594 "name": "BaseBdev1", 00:10:13.594 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:13.594 "is_configured": true, 00:10:13.594 "data_offset": 2048, 00:10:13.594 "data_size": 63488 
00:10:13.594 }, 00:10:13.594 { 00:10:13.594 "name": "BaseBdev2", 00:10:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.594 "is_configured": false, 00:10:13.594 "data_offset": 0, 00:10:13.594 "data_size": 0 00:10:13.594 }, 00:10:13.594 { 00:10:13.594 "name": "BaseBdev3", 00:10:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.594 "is_configured": false, 00:10:13.594 "data_offset": 0, 00:10:13.594 "data_size": 0 00:10:13.594 } 00:10:13.594 ] 00:10:13.594 }' 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.594 11:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.163 [2024-11-05 11:26:13.189585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.163 [2024-11-05 11:26:13.189650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.163 [2024-11-05 11:26:13.197624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.163 [2024-11-05 
11:26:13.199407] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.163 [2024-11-05 11:26:13.199447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.163 [2024-11-05 11:26:13.199457] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.163 [2024-11-05 11:26:13.199467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.163 "name": "Existed_Raid", 00:10:14.163 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:14.163 "strip_size_kb": 64, 00:10:14.163 "state": "configuring", 00:10:14.163 "raid_level": "raid0", 00:10:14.163 "superblock": true, 00:10:14.163 "num_base_bdevs": 3, 00:10:14.163 "num_base_bdevs_discovered": 1, 00:10:14.163 "num_base_bdevs_operational": 3, 00:10:14.163 "base_bdevs_list": [ 00:10:14.163 { 00:10:14.163 "name": "BaseBdev1", 00:10:14.163 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:14.163 "is_configured": true, 00:10:14.163 "data_offset": 2048, 00:10:14.163 "data_size": 63488 00:10:14.163 }, 00:10:14.163 { 00:10:14.163 "name": "BaseBdev2", 00:10:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.163 "is_configured": false, 00:10:14.163 "data_offset": 0, 00:10:14.163 "data_size": 0 00:10:14.163 }, 00:10:14.163 { 00:10:14.163 "name": "BaseBdev3", 00:10:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.163 "is_configured": false, 00:10:14.163 "data_offset": 0, 00:10:14.163 "data_size": 0 00:10:14.163 } 00:10:14.163 ] 00:10:14.163 }' 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.163 11:26:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.423 [2024-11-05 11:26:13.674665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.423 BaseBdev2 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.423 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.423 [ 00:10:14.423 { 00:10:14.423 "name": "BaseBdev2", 00:10:14.423 "aliases": [ 00:10:14.423 "a654870a-d11d-4f9d-a3b5-4ad86d150763" 00:10:14.423 ], 00:10:14.423 "product_name": "Malloc disk", 00:10:14.682 "block_size": 512, 00:10:14.682 "num_blocks": 65536, 00:10:14.682 "uuid": "a654870a-d11d-4f9d-a3b5-4ad86d150763", 00:10:14.682 "assigned_rate_limits": { 00:10:14.682 "rw_ios_per_sec": 0, 00:10:14.682 "rw_mbytes_per_sec": 0, 00:10:14.682 "r_mbytes_per_sec": 0, 00:10:14.682 "w_mbytes_per_sec": 0 00:10:14.682 }, 00:10:14.682 "claimed": true, 00:10:14.682 "claim_type": "exclusive_write", 00:10:14.682 "zoned": false, 00:10:14.682 "supported_io_types": { 00:10:14.682 "read": true, 00:10:14.682 "write": true, 00:10:14.682 "unmap": true, 00:10:14.682 "flush": true, 00:10:14.682 "reset": true, 00:10:14.682 "nvme_admin": false, 00:10:14.682 "nvme_io": false, 00:10:14.682 "nvme_io_md": false, 00:10:14.682 "write_zeroes": true, 00:10:14.682 "zcopy": true, 00:10:14.682 "get_zone_info": false, 00:10:14.682 "zone_management": false, 00:10:14.682 "zone_append": false, 00:10:14.682 "compare": false, 00:10:14.682 "compare_and_write": false, 00:10:14.682 "abort": true, 00:10:14.682 "seek_hole": false, 00:10:14.682 "seek_data": false, 00:10:14.682 "copy": true, 00:10:14.682 "nvme_iov_md": false 00:10:14.682 }, 00:10:14.682 "memory_domains": [ 00:10:14.682 { 00:10:14.682 "dma_device_id": "system", 00:10:14.682 "dma_device_type": 1 00:10:14.682 }, 00:10:14.682 { 00:10:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.682 "dma_device_type": 2 00:10:14.682 } 00:10:14.682 ], 00:10:14.682 "driver_specific": {} 00:10:14.682 } 00:10:14.682 ] 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.682 "name": "Existed_Raid", 00:10:14.682 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:14.682 "strip_size_kb": 64, 00:10:14.682 "state": "configuring", 00:10:14.682 "raid_level": "raid0", 00:10:14.682 "superblock": true, 00:10:14.682 "num_base_bdevs": 3, 00:10:14.682 "num_base_bdevs_discovered": 2, 00:10:14.682 "num_base_bdevs_operational": 3, 00:10:14.682 "base_bdevs_list": [ 00:10:14.682 { 00:10:14.682 "name": "BaseBdev1", 00:10:14.682 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:14.682 "is_configured": true, 00:10:14.682 "data_offset": 2048, 00:10:14.682 "data_size": 63488 00:10:14.682 }, 00:10:14.682 { 00:10:14.682 "name": "BaseBdev2", 00:10:14.682 "uuid": "a654870a-d11d-4f9d-a3b5-4ad86d150763", 00:10:14.682 "is_configured": true, 00:10:14.682 "data_offset": 2048, 00:10:14.682 "data_size": 63488 00:10:14.682 }, 00:10:14.682 { 00:10:14.682 "name": "BaseBdev3", 00:10:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.682 "is_configured": false, 00:10:14.682 "data_offset": 0, 00:10:14.682 "data_size": 0 00:10:14.682 } 00:10:14.682 ] 00:10:14.682 }' 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.682 11:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 [2024-11-05 11:26:14.179400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.942 [2024-11-05 11:26:14.179758] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.942 [2024-11-05 11:26:14.179819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:14.942 [2024-11-05 11:26:14.180152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:14.942 BaseBdev3 00:10:14.942 [2024-11-05 11:26:14.180378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.942 [2024-11-05 11:26:14.180390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:14.942 [2024-11-05 11:26:14.180556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 [ 00:10:14.942 { 00:10:14.942 "name": "BaseBdev3", 00:10:14.942 "aliases": [ 00:10:14.942 "99a3d609-a69a-4fda-b068-76b9c18bffb7" 00:10:14.942 ], 00:10:14.942 "product_name": "Malloc disk", 00:10:14.942 "block_size": 512, 00:10:14.942 "num_blocks": 65536, 00:10:14.942 "uuid": "99a3d609-a69a-4fda-b068-76b9c18bffb7", 00:10:14.942 "assigned_rate_limits": { 00:10:14.942 "rw_ios_per_sec": 0, 00:10:14.942 "rw_mbytes_per_sec": 0, 00:10:14.942 "r_mbytes_per_sec": 0, 00:10:14.942 "w_mbytes_per_sec": 0 00:10:14.942 }, 00:10:14.942 "claimed": true, 00:10:14.942 "claim_type": "exclusive_write", 00:10:14.942 "zoned": false, 00:10:14.942 "supported_io_types": { 00:10:14.942 "read": true, 00:10:14.942 "write": true, 00:10:14.942 "unmap": true, 00:10:14.942 "flush": true, 00:10:14.942 "reset": true, 00:10:14.942 "nvme_admin": false, 00:10:14.942 "nvme_io": false, 00:10:14.942 "nvme_io_md": false, 00:10:14.942 "write_zeroes": true, 00:10:14.942 "zcopy": true, 00:10:14.942 "get_zone_info": false, 00:10:14.942 "zone_management": false, 00:10:14.942 "zone_append": false, 00:10:14.942 "compare": false, 00:10:14.942 "compare_and_write": false, 00:10:14.942 "abort": true, 00:10:14.942 "seek_hole": false, 00:10:14.942 "seek_data": false, 00:10:14.942 "copy": true, 00:10:14.942 "nvme_iov_md": false 00:10:14.942 }, 00:10:14.942 "memory_domains": [ 00:10:14.942 { 00:10:14.942 "dma_device_id": "system", 00:10:14.942 "dma_device_type": 1 00:10:14.942 }, 00:10:14.942 { 00:10:14.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.942 "dma_device_type": 2 00:10:14.942 } 00:10:14.942 ], 00:10:14.942 "driver_specific": 
{} 00:10:14.942 } 00:10:14.942 ] 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.942 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.201 "name": "Existed_Raid", 00:10:15.201 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:15.201 "strip_size_kb": 64, 00:10:15.201 "state": "online", 00:10:15.201 "raid_level": "raid0", 00:10:15.201 "superblock": true, 00:10:15.201 "num_base_bdevs": 3, 00:10:15.201 "num_base_bdevs_discovered": 3, 00:10:15.201 "num_base_bdevs_operational": 3, 00:10:15.201 "base_bdevs_list": [ 00:10:15.201 { 00:10:15.201 "name": "BaseBdev1", 00:10:15.201 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 }, 00:10:15.201 { 00:10:15.201 "name": "BaseBdev2", 00:10:15.201 "uuid": "a654870a-d11d-4f9d-a3b5-4ad86d150763", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 }, 00:10:15.201 { 00:10:15.201 "name": "BaseBdev3", 00:10:15.201 "uuid": "99a3d609-a69a-4fda-b068-76b9c18bffb7", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 } 00:10:15.201 ] 00:10:15.201 }' 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.201 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.461 [2024-11-05 11:26:14.619173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.461 "name": "Existed_Raid", 00:10:15.461 "aliases": [ 00:10:15.461 "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a" 00:10:15.461 ], 00:10:15.461 "product_name": "Raid Volume", 00:10:15.461 "block_size": 512, 00:10:15.461 "num_blocks": 190464, 00:10:15.461 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:15.461 "assigned_rate_limits": { 00:10:15.461 "rw_ios_per_sec": 0, 00:10:15.461 "rw_mbytes_per_sec": 0, 00:10:15.461 "r_mbytes_per_sec": 0, 00:10:15.461 "w_mbytes_per_sec": 0 00:10:15.461 }, 00:10:15.461 "claimed": false, 00:10:15.461 "zoned": false, 00:10:15.461 "supported_io_types": { 00:10:15.461 "read": true, 00:10:15.461 "write": true, 00:10:15.461 "unmap": true, 00:10:15.461 "flush": true, 00:10:15.461 "reset": true, 00:10:15.461 "nvme_admin": false, 00:10:15.461 "nvme_io": false, 00:10:15.461 "nvme_io_md": false, 00:10:15.461 
"write_zeroes": true, 00:10:15.461 "zcopy": false, 00:10:15.461 "get_zone_info": false, 00:10:15.461 "zone_management": false, 00:10:15.461 "zone_append": false, 00:10:15.461 "compare": false, 00:10:15.461 "compare_and_write": false, 00:10:15.461 "abort": false, 00:10:15.461 "seek_hole": false, 00:10:15.461 "seek_data": false, 00:10:15.461 "copy": false, 00:10:15.461 "nvme_iov_md": false 00:10:15.461 }, 00:10:15.461 "memory_domains": [ 00:10:15.461 { 00:10:15.461 "dma_device_id": "system", 00:10:15.461 "dma_device_type": 1 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.461 "dma_device_type": 2 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "dma_device_id": "system", 00:10:15.461 "dma_device_type": 1 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.461 "dma_device_type": 2 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "dma_device_id": "system", 00:10:15.461 "dma_device_type": 1 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.461 "dma_device_type": 2 00:10:15.461 } 00:10:15.461 ], 00:10:15.461 "driver_specific": { 00:10:15.461 "raid": { 00:10:15.461 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:15.461 "strip_size_kb": 64, 00:10:15.461 "state": "online", 00:10:15.461 "raid_level": "raid0", 00:10:15.461 "superblock": true, 00:10:15.461 "num_base_bdevs": 3, 00:10:15.461 "num_base_bdevs_discovered": 3, 00:10:15.461 "num_base_bdevs_operational": 3, 00:10:15.461 "base_bdevs_list": [ 00:10:15.461 { 00:10:15.461 "name": "BaseBdev1", 00:10:15.461 "uuid": "7250b2f5-a98e-4445-984f-117655ff8c4a", 00:10:15.461 "is_configured": true, 00:10:15.461 "data_offset": 2048, 00:10:15.461 "data_size": 63488 00:10:15.461 }, 00:10:15.461 { 00:10:15.461 "name": "BaseBdev2", 00:10:15.461 "uuid": "a654870a-d11d-4f9d-a3b5-4ad86d150763", 00:10:15.461 "is_configured": true, 00:10:15.461 "data_offset": 2048, 00:10:15.461 "data_size": 63488 00:10:15.461 }, 
00:10:15.461 { 00:10:15.461 "name": "BaseBdev3", 00:10:15.461 "uuid": "99a3d609-a69a-4fda-b068-76b9c18bffb7", 00:10:15.461 "is_configured": true, 00:10:15.461 "data_offset": 2048, 00:10:15.461 "data_size": 63488 00:10:15.461 } 00:10:15.461 ] 00:10:15.461 } 00:10:15.461 } 00:10:15.461 }' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.461 BaseBdev2 00:10:15.461 BaseBdev3' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.461 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.721 
11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.721 [2024-11-05 11:26:14.890436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.721 [2024-11-05 11:26:14.890470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.721 [2024-11-05 11:26:14.890526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.721 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.981 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.981 11:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.981 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.981 11:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.981 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.981 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.981 "name": "Existed_Raid", 00:10:15.981 "uuid": "2bfedba9-2fcb-4b7a-98a1-fe8d1d3b898a", 00:10:15.981 "strip_size_kb": 64, 00:10:15.981 "state": "offline", 00:10:15.981 "raid_level": "raid0", 00:10:15.981 "superblock": true, 00:10:15.981 "num_base_bdevs": 3, 00:10:15.981 "num_base_bdevs_discovered": 2, 00:10:15.981 "num_base_bdevs_operational": 2, 00:10:15.981 "base_bdevs_list": [ 00:10:15.981 { 00:10:15.981 "name": null, 00:10:15.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.981 "is_configured": false, 00:10:15.981 "data_offset": 0, 00:10:15.981 "data_size": 63488 00:10:15.981 }, 00:10:15.981 { 00:10:15.981 "name": "BaseBdev2", 00:10:15.981 "uuid": "a654870a-d11d-4f9d-a3b5-4ad86d150763", 00:10:15.981 "is_configured": true, 00:10:15.981 "data_offset": 2048, 00:10:15.981 "data_size": 63488 00:10:15.981 }, 00:10:15.981 { 00:10:15.981 "name": "BaseBdev3", 00:10:15.981 "uuid": "99a3d609-a69a-4fda-b068-76b9c18bffb7", 
00:10:15.981 "is_configured": true, 00:10:15.981 "data_offset": 2048, 00:10:15.981 "data_size": 63488 00:10:15.981 } 00:10:15.981 ] 00:10:15.981 }' 00:10:15.981 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.981 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.240 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.240 [2024-11-05 11:26:15.498295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.499 [2024-11-05 11:26:15.651771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.499 [2024-11-05 11:26:15.651901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.499 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 BaseBdev2 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:16.759 11:26:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 [ 00:10:16.759 { 00:10:16.759 "name": "BaseBdev2", 00:10:16.759 "aliases": [ 00:10:16.759 "60fc6333-68f6-4af2-b4f6-5a03d8c3439e" 00:10:16.759 ], 00:10:16.759 "product_name": "Malloc disk", 00:10:16.759 "block_size": 512, 00:10:16.759 "num_blocks": 65536, 00:10:16.759 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:16.759 "assigned_rate_limits": { 00:10:16.759 "rw_ios_per_sec": 0, 00:10:16.759 "rw_mbytes_per_sec": 0, 00:10:16.759 "r_mbytes_per_sec": 0, 00:10:16.759 "w_mbytes_per_sec": 0 00:10:16.759 }, 00:10:16.759 "claimed": false, 00:10:16.759 "zoned": false, 00:10:16.759 "supported_io_types": { 00:10:16.759 "read": true, 00:10:16.759 "write": true, 00:10:16.759 "unmap": true, 00:10:16.759 "flush": true, 00:10:16.759 "reset": true, 00:10:16.759 "nvme_admin": false, 00:10:16.759 "nvme_io": false, 00:10:16.759 "nvme_io_md": false, 00:10:16.759 "write_zeroes": true, 00:10:16.759 "zcopy": true, 00:10:16.759 "get_zone_info": false, 00:10:16.759 
"zone_management": false, 00:10:16.759 "zone_append": false, 00:10:16.759 "compare": false, 00:10:16.759 "compare_and_write": false, 00:10:16.759 "abort": true, 00:10:16.759 "seek_hole": false, 00:10:16.759 "seek_data": false, 00:10:16.759 "copy": true, 00:10:16.759 "nvme_iov_md": false 00:10:16.759 }, 00:10:16.759 "memory_domains": [ 00:10:16.759 { 00:10:16.759 "dma_device_id": "system", 00:10:16.759 "dma_device_type": 1 00:10:16.759 }, 00:10:16.759 { 00:10:16.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.759 "dma_device_type": 2 00:10:16.759 } 00:10:16.759 ], 00:10:16.759 "driver_specific": {} 00:10:16.759 } 00:10:16.759 ] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 BaseBdev3 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.759 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.759 [ 00:10:16.759 { 00:10:16.759 "name": "BaseBdev3", 00:10:16.759 "aliases": [ 00:10:16.759 "b79517ca-c812-46a2-a678-74324a02f5e0" 00:10:16.759 ], 00:10:16.759 "product_name": "Malloc disk", 00:10:16.759 "block_size": 512, 00:10:16.759 "num_blocks": 65536, 00:10:16.759 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:16.760 "assigned_rate_limits": { 00:10:16.760 "rw_ios_per_sec": 0, 00:10:16.760 "rw_mbytes_per_sec": 0, 00:10:16.760 "r_mbytes_per_sec": 0, 00:10:16.760 "w_mbytes_per_sec": 0 00:10:16.760 }, 00:10:16.760 "claimed": false, 00:10:16.760 "zoned": false, 00:10:16.760 "supported_io_types": { 00:10:16.760 "read": true, 00:10:16.760 "write": true, 00:10:16.760 "unmap": true, 00:10:16.760 "flush": true, 00:10:16.760 "reset": true, 00:10:16.760 "nvme_admin": false, 00:10:16.760 "nvme_io": false, 00:10:16.760 "nvme_io_md": false, 00:10:16.760 "write_zeroes": true, 00:10:16.760 
"zcopy": true, 00:10:16.760 "get_zone_info": false, 00:10:16.760 "zone_management": false, 00:10:16.760 "zone_append": false, 00:10:16.760 "compare": false, 00:10:16.760 "compare_and_write": false, 00:10:16.760 "abort": true, 00:10:16.760 "seek_hole": false, 00:10:16.760 "seek_data": false, 00:10:16.760 "copy": true, 00:10:16.760 "nvme_iov_md": false 00:10:16.760 }, 00:10:16.760 "memory_domains": [ 00:10:16.760 { 00:10:16.760 "dma_device_id": "system", 00:10:16.760 "dma_device_type": 1 00:10:16.760 }, 00:10:16.760 { 00:10:16.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.760 "dma_device_type": 2 00:10:16.760 } 00:10:16.760 ], 00:10:16.760 "driver_specific": {} 00:10:16.760 } 00:10:16.760 ] 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.760 [2024-11-05 11:26:15.958135] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.760 [2024-11-05 11:26:15.958204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.760 [2024-11-05 11:26:15.958225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.760 [2024-11-05 11:26:15.960127] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.760 11:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.760 11:26:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.760 "name": "Existed_Raid", 00:10:16.760 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:16.760 "strip_size_kb": 64, 00:10:16.760 "state": "configuring", 00:10:16.760 "raid_level": "raid0", 00:10:16.760 "superblock": true, 00:10:16.760 "num_base_bdevs": 3, 00:10:16.760 "num_base_bdevs_discovered": 2, 00:10:16.760 "num_base_bdevs_operational": 3, 00:10:16.760 "base_bdevs_list": [ 00:10:16.760 { 00:10:16.760 "name": "BaseBdev1", 00:10:16.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.760 "is_configured": false, 00:10:16.760 "data_offset": 0, 00:10:16.760 "data_size": 0 00:10:16.760 }, 00:10:16.760 { 00:10:16.760 "name": "BaseBdev2", 00:10:16.760 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:16.760 "is_configured": true, 00:10:16.760 "data_offset": 2048, 00:10:16.760 "data_size": 63488 00:10:16.760 }, 00:10:16.760 { 00:10:16.760 "name": "BaseBdev3", 00:10:16.760 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:16.760 "is_configured": true, 00:10:16.760 "data_offset": 2048, 00:10:16.760 "data_size": 63488 00:10:16.760 } 00:10:16.760 ] 00:10:16.760 }' 00:10:16.760 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.760 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.329 [2024-11-05 11:26:16.393396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.329 11:26:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.329 "name": "Existed_Raid", 00:10:17.329 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:17.329 "strip_size_kb": 64, 
00:10:17.329 "state": "configuring", 00:10:17.329 "raid_level": "raid0", 00:10:17.329 "superblock": true, 00:10:17.329 "num_base_bdevs": 3, 00:10:17.329 "num_base_bdevs_discovered": 1, 00:10:17.329 "num_base_bdevs_operational": 3, 00:10:17.329 "base_bdevs_list": [ 00:10:17.329 { 00:10:17.329 "name": "BaseBdev1", 00:10:17.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.329 "is_configured": false, 00:10:17.329 "data_offset": 0, 00:10:17.329 "data_size": 0 00:10:17.329 }, 00:10:17.329 { 00:10:17.329 "name": null, 00:10:17.329 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:17.329 "is_configured": false, 00:10:17.329 "data_offset": 0, 00:10:17.329 "data_size": 63488 00:10:17.329 }, 00:10:17.329 { 00:10:17.329 "name": "BaseBdev3", 00:10:17.329 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:17.329 "is_configured": true, 00:10:17.329 "data_offset": 2048, 00:10:17.329 "data_size": 63488 00:10:17.329 } 00:10:17.329 ] 00:10:17.329 }' 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.329 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.588 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.588 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.588 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.847 [2024-11-05 11:26:16.945328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.847 BaseBdev1 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.847 
[ 00:10:17.847 { 00:10:17.847 "name": "BaseBdev1", 00:10:17.847 "aliases": [ 00:10:17.847 "c3b090fd-2013-4857-9a22-2282e66c402d" 00:10:17.847 ], 00:10:17.847 "product_name": "Malloc disk", 00:10:17.847 "block_size": 512, 00:10:17.847 "num_blocks": 65536, 00:10:17.847 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:17.847 "assigned_rate_limits": { 00:10:17.847 "rw_ios_per_sec": 0, 00:10:17.847 "rw_mbytes_per_sec": 0, 00:10:17.847 "r_mbytes_per_sec": 0, 00:10:17.847 "w_mbytes_per_sec": 0 00:10:17.847 }, 00:10:17.847 "claimed": true, 00:10:17.847 "claim_type": "exclusive_write", 00:10:17.847 "zoned": false, 00:10:17.847 "supported_io_types": { 00:10:17.847 "read": true, 00:10:17.847 "write": true, 00:10:17.847 "unmap": true, 00:10:17.847 "flush": true, 00:10:17.847 "reset": true, 00:10:17.847 "nvme_admin": false, 00:10:17.847 "nvme_io": false, 00:10:17.847 "nvme_io_md": false, 00:10:17.847 "write_zeroes": true, 00:10:17.847 "zcopy": true, 00:10:17.847 "get_zone_info": false, 00:10:17.847 "zone_management": false, 00:10:17.847 "zone_append": false, 00:10:17.847 "compare": false, 00:10:17.847 "compare_and_write": false, 00:10:17.847 "abort": true, 00:10:17.847 "seek_hole": false, 00:10:17.847 "seek_data": false, 00:10:17.847 "copy": true, 00:10:17.847 "nvme_iov_md": false 00:10:17.847 }, 00:10:17.847 "memory_domains": [ 00:10:17.847 { 00:10:17.847 "dma_device_id": "system", 00:10:17.847 "dma_device_type": 1 00:10:17.847 }, 00:10:17.847 { 00:10:17.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.847 "dma_device_type": 2 00:10:17.847 } 00:10:17.847 ], 00:10:17.847 "driver_specific": {} 00:10:17.847 } 00:10:17.847 ] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.847 11:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.847 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.847 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.847 "name": "Existed_Raid", 00:10:17.847 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:17.847 "strip_size_kb": 64, 00:10:17.847 "state": "configuring", 00:10:17.847 "raid_level": "raid0", 00:10:17.847 "superblock": true, 
00:10:17.847 "num_base_bdevs": 3, 00:10:17.847 "num_base_bdevs_discovered": 2, 00:10:17.847 "num_base_bdevs_operational": 3, 00:10:17.847 "base_bdevs_list": [ 00:10:17.847 { 00:10:17.847 "name": "BaseBdev1", 00:10:17.847 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:17.847 "is_configured": true, 00:10:17.847 "data_offset": 2048, 00:10:17.847 "data_size": 63488 00:10:17.847 }, 00:10:17.847 { 00:10:17.847 "name": null, 00:10:17.847 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:17.847 "is_configured": false, 00:10:17.847 "data_offset": 0, 00:10:17.847 "data_size": 63488 00:10:17.847 }, 00:10:17.847 { 00:10:17.847 "name": "BaseBdev3", 00:10:17.847 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:17.847 "is_configured": true, 00:10:17.847 "data_offset": 2048, 00:10:17.847 "data_size": 63488 00:10:17.847 } 00:10:17.848 ] 00:10:17.848 }' 00:10:17.848 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.848 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.416 [2024-11-05 11:26:17.492432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.416 "name": "Existed_Raid", 00:10:18.416 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:18.416 "strip_size_kb": 64, 00:10:18.416 "state": "configuring", 00:10:18.416 "raid_level": "raid0", 00:10:18.416 "superblock": true, 00:10:18.416 "num_base_bdevs": 3, 00:10:18.416 "num_base_bdevs_discovered": 1, 00:10:18.416 "num_base_bdevs_operational": 3, 00:10:18.416 "base_bdevs_list": [ 00:10:18.416 { 00:10:18.416 "name": "BaseBdev1", 00:10:18.416 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:18.416 "is_configured": true, 00:10:18.416 "data_offset": 2048, 00:10:18.416 "data_size": 63488 00:10:18.416 }, 00:10:18.416 { 00:10:18.416 "name": null, 00:10:18.416 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:18.416 "is_configured": false, 00:10:18.416 "data_offset": 0, 00:10:18.416 "data_size": 63488 00:10:18.416 }, 00:10:18.416 { 00:10:18.416 "name": null, 00:10:18.416 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:18.416 "is_configured": false, 00:10:18.416 "data_offset": 0, 00:10:18.416 "data_size": 63488 00:10:18.416 } 00:10:18.416 ] 00:10:18.416 }' 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.416 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.984 11:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.984 [2024-11-05 11:26:18.003712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.984 "name": "Existed_Raid", 00:10:18.984 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:18.984 "strip_size_kb": 64, 00:10:18.984 "state": "configuring", 00:10:18.984 "raid_level": "raid0", 00:10:18.984 "superblock": true, 00:10:18.984 "num_base_bdevs": 3, 00:10:18.984 "num_base_bdevs_discovered": 2, 00:10:18.984 "num_base_bdevs_operational": 3, 00:10:18.984 "base_bdevs_list": [ 00:10:18.984 { 00:10:18.984 "name": "BaseBdev1", 00:10:18.984 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:18.984 "is_configured": true, 00:10:18.984 "data_offset": 2048, 00:10:18.984 "data_size": 63488 00:10:18.984 }, 00:10:18.984 { 00:10:18.984 "name": null, 00:10:18.984 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:18.984 "is_configured": false, 00:10:18.984 "data_offset": 0, 00:10:18.984 "data_size": 63488 00:10:18.984 }, 00:10:18.984 { 00:10:18.984 "name": "BaseBdev3", 00:10:18.984 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:18.984 "is_configured": true, 00:10:18.984 "data_offset": 2048, 00:10:18.984 "data_size": 63488 00:10:18.984 } 00:10:18.984 ] 00:10:18.984 }' 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.984 11:26:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.243 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.243 [2024-11-05 11:26:18.455034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.503 "name": "Existed_Raid", 00:10:19.503 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:19.503 "strip_size_kb": 64, 00:10:19.503 "state": "configuring", 00:10:19.503 "raid_level": "raid0", 00:10:19.503 "superblock": true, 00:10:19.503 "num_base_bdevs": 3, 00:10:19.503 "num_base_bdevs_discovered": 1, 00:10:19.503 "num_base_bdevs_operational": 3, 00:10:19.503 "base_bdevs_list": [ 00:10:19.503 { 00:10:19.503 "name": null, 00:10:19.503 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:19.503 "is_configured": false, 00:10:19.503 "data_offset": 0, 00:10:19.503 "data_size": 63488 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "name": null, 00:10:19.503 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:19.503 "is_configured": false, 00:10:19.503 "data_offset": 0, 00:10:19.503 
"data_size": 63488 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "name": "BaseBdev3", 00:10:19.503 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:19.503 "is_configured": true, 00:10:19.503 "data_offset": 2048, 00:10:19.503 "data_size": 63488 00:10:19.503 } 00:10:19.503 ] 00:10:19.503 }' 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.503 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.762 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.762 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.762 11:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.762 11:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.762 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.020 [2024-11-05 11:26:19.052107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.020 11:26:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.020 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.021 "name": "Existed_Raid", 00:10:20.021 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:20.021 "strip_size_kb": 64, 00:10:20.021 "state": "configuring", 00:10:20.021 "raid_level": "raid0", 00:10:20.021 "superblock": true, 00:10:20.021 "num_base_bdevs": 3, 00:10:20.021 
"num_base_bdevs_discovered": 2, 00:10:20.021 "num_base_bdevs_operational": 3, 00:10:20.021 "base_bdevs_list": [ 00:10:20.021 { 00:10:20.021 "name": null, 00:10:20.021 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:20.021 "is_configured": false, 00:10:20.021 "data_offset": 0, 00:10:20.021 "data_size": 63488 00:10:20.021 }, 00:10:20.021 { 00:10:20.021 "name": "BaseBdev2", 00:10:20.021 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:20.021 "is_configured": true, 00:10:20.021 "data_offset": 2048, 00:10:20.021 "data_size": 63488 00:10:20.021 }, 00:10:20.021 { 00:10:20.021 "name": "BaseBdev3", 00:10:20.021 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:20.021 "is_configured": true, 00:10:20.021 "data_offset": 2048, 00:10:20.021 "data_size": 63488 00:10:20.021 } 00:10:20.021 ] 00:10:20.021 }' 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.021 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.280 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.280 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.280 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.280 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.280 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.538 11:26:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3b090fd-2013-4857-9a22-2282e66c402d 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.538 [2024-11-05 11:26:19.663849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.538 [2024-11-05 11:26:19.664147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.538 [2024-11-05 11:26:19.664171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.538 [2024-11-05 11:26:19.664425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:20.538 [2024-11-05 11:26:19.664570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.538 [2024-11-05 11:26:19.664580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.538 NewBaseBdev 00:10:20.538 [2024-11-05 11:26:19.664748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:20.538 
11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.538 [ 00:10:20.538 { 00:10:20.538 "name": "NewBaseBdev", 00:10:20.538 "aliases": [ 00:10:20.538 "c3b090fd-2013-4857-9a22-2282e66c402d" 00:10:20.538 ], 00:10:20.538 "product_name": "Malloc disk", 00:10:20.538 "block_size": 512, 00:10:20.538 "num_blocks": 65536, 00:10:20.538 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:20.538 "assigned_rate_limits": { 00:10:20.538 "rw_ios_per_sec": 0, 00:10:20.538 "rw_mbytes_per_sec": 0, 00:10:20.538 "r_mbytes_per_sec": 0, 00:10:20.538 "w_mbytes_per_sec": 0 00:10:20.538 }, 00:10:20.538 "claimed": true, 00:10:20.538 "claim_type": "exclusive_write", 00:10:20.538 "zoned": false, 00:10:20.538 "supported_io_types": { 00:10:20.538 "read": true, 00:10:20.538 "write": true, 00:10:20.538 
"unmap": true, 00:10:20.538 "flush": true, 00:10:20.538 "reset": true, 00:10:20.538 "nvme_admin": false, 00:10:20.538 "nvme_io": false, 00:10:20.538 "nvme_io_md": false, 00:10:20.538 "write_zeroes": true, 00:10:20.538 "zcopy": true, 00:10:20.538 "get_zone_info": false, 00:10:20.538 "zone_management": false, 00:10:20.538 "zone_append": false, 00:10:20.538 "compare": false, 00:10:20.538 "compare_and_write": false, 00:10:20.538 "abort": true, 00:10:20.538 "seek_hole": false, 00:10:20.538 "seek_data": false, 00:10:20.538 "copy": true, 00:10:20.538 "nvme_iov_md": false 00:10:20.538 }, 00:10:20.538 "memory_domains": [ 00:10:20.538 { 00:10:20.538 "dma_device_id": "system", 00:10:20.538 "dma_device_type": 1 00:10:20.538 }, 00:10:20.538 { 00:10:20.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.538 "dma_device_type": 2 00:10:20.538 } 00:10:20.538 ], 00:10:20.538 "driver_specific": {} 00:10:20.538 } 00:10:20.538 ] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.538 "name": "Existed_Raid", 00:10:20.538 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:20.538 "strip_size_kb": 64, 00:10:20.538 "state": "online", 00:10:20.538 "raid_level": "raid0", 00:10:20.538 "superblock": true, 00:10:20.538 "num_base_bdevs": 3, 00:10:20.538 "num_base_bdevs_discovered": 3, 00:10:20.538 "num_base_bdevs_operational": 3, 00:10:20.538 "base_bdevs_list": [ 00:10:20.538 { 00:10:20.538 "name": "NewBaseBdev", 00:10:20.538 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:20.538 "is_configured": true, 00:10:20.538 "data_offset": 2048, 00:10:20.538 "data_size": 63488 00:10:20.538 }, 00:10:20.538 { 00:10:20.538 "name": "BaseBdev2", 00:10:20.538 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:20.538 "is_configured": true, 00:10:20.538 "data_offset": 2048, 00:10:20.538 "data_size": 63488 00:10:20.538 }, 00:10:20.538 { 00:10:20.538 "name": "BaseBdev3", 00:10:20.538 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:20.538 
"is_configured": true, 00:10:20.538 "data_offset": 2048, 00:10:20.538 "data_size": 63488 00:10:20.538 } 00:10:20.538 ] 00:10:20.538 }' 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.538 11:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 [2024-11-05 11:26:20.183420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.106 "name": "Existed_Raid", 00:10:21.106 "aliases": [ 00:10:21.106 "79be49d9-54e7-4954-b602-87ee24fa41d1" 00:10:21.106 ], 00:10:21.106 "product_name": "Raid 
Volume", 00:10:21.106 "block_size": 512, 00:10:21.106 "num_blocks": 190464, 00:10:21.106 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:21.106 "assigned_rate_limits": { 00:10:21.106 "rw_ios_per_sec": 0, 00:10:21.106 "rw_mbytes_per_sec": 0, 00:10:21.106 "r_mbytes_per_sec": 0, 00:10:21.106 "w_mbytes_per_sec": 0 00:10:21.106 }, 00:10:21.106 "claimed": false, 00:10:21.106 "zoned": false, 00:10:21.106 "supported_io_types": { 00:10:21.106 "read": true, 00:10:21.106 "write": true, 00:10:21.106 "unmap": true, 00:10:21.106 "flush": true, 00:10:21.106 "reset": true, 00:10:21.106 "nvme_admin": false, 00:10:21.106 "nvme_io": false, 00:10:21.106 "nvme_io_md": false, 00:10:21.106 "write_zeroes": true, 00:10:21.106 "zcopy": false, 00:10:21.106 "get_zone_info": false, 00:10:21.106 "zone_management": false, 00:10:21.106 "zone_append": false, 00:10:21.106 "compare": false, 00:10:21.106 "compare_and_write": false, 00:10:21.106 "abort": false, 00:10:21.106 "seek_hole": false, 00:10:21.106 "seek_data": false, 00:10:21.106 "copy": false, 00:10:21.106 "nvme_iov_md": false 00:10:21.106 }, 00:10:21.106 "memory_domains": [ 00:10:21.106 { 00:10:21.106 "dma_device_id": "system", 00:10:21.106 "dma_device_type": 1 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.106 "dma_device_type": 2 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "dma_device_id": "system", 00:10:21.106 "dma_device_type": 1 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.106 "dma_device_type": 2 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "dma_device_id": "system", 00:10:21.106 "dma_device_type": 1 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.106 "dma_device_type": 2 00:10:21.106 } 00:10:21.106 ], 00:10:21.106 "driver_specific": { 00:10:21.106 "raid": { 00:10:21.106 "uuid": "79be49d9-54e7-4954-b602-87ee24fa41d1", 00:10:21.106 "strip_size_kb": 64, 00:10:21.106 "state": "online", 
00:10:21.106 "raid_level": "raid0", 00:10:21.106 "superblock": true, 00:10:21.106 "num_base_bdevs": 3, 00:10:21.106 "num_base_bdevs_discovered": 3, 00:10:21.106 "num_base_bdevs_operational": 3, 00:10:21.106 "base_bdevs_list": [ 00:10:21.106 { 00:10:21.106 "name": "NewBaseBdev", 00:10:21.106 "uuid": "c3b090fd-2013-4857-9a22-2282e66c402d", 00:10:21.106 "is_configured": true, 00:10:21.106 "data_offset": 2048, 00:10:21.106 "data_size": 63488 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "name": "BaseBdev2", 00:10:21.106 "uuid": "60fc6333-68f6-4af2-b4f6-5a03d8c3439e", 00:10:21.106 "is_configured": true, 00:10:21.106 "data_offset": 2048, 00:10:21.106 "data_size": 63488 00:10:21.106 }, 00:10:21.106 { 00:10:21.106 "name": "BaseBdev3", 00:10:21.106 "uuid": "b79517ca-c812-46a2-a678-74324a02f5e0", 00:10:21.106 "is_configured": true, 00:10:21.106 "data_offset": 2048, 00:10:21.106 "data_size": 63488 00:10:21.106 } 00:10:21.106 ] 00:10:21.106 } 00:10:21.106 } 00:10:21.106 }' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.106 BaseBdev2 00:10:21.106 BaseBdev3' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.106 11:26:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.106 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.107 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.107 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.370 11:26:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.370 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 [2024-11-05 11:26:20.462567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.370 [2024-11-05 11:26:20.462639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.370 [2024-11-05 11:26:20.462754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.370 [2024-11-05 11:26:20.462827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.371 [2024-11-05 11:26:20.462864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64570 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64570 ']' 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
64570 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64570 00:10:21.371 killing process with pid 64570 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64570' 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64570 00:10:21.371 [2024-11-05 11:26:20.511801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.371 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64570 00:10:21.639 [2024-11-05 11:26:20.811881] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.014 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.014 00:10:23.014 real 0m10.731s 00:10:23.014 user 0m17.091s 00:10:23.014 sys 0m1.951s 00:10:23.014 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.014 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.014 ************************************ 00:10:23.014 END TEST raid_state_function_test_sb 00:10:23.014 ************************************ 00:10:23.014 11:26:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:23.014 11:26:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:23.014 
11:26:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.014 11:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.014 ************************************ 00:10:23.014 START TEST raid_superblock_test 00:10:23.014 ************************************ 00:10:23.014 11:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65196 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65196 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65196 ']' 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.014 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.014 [2024-11-05 11:26:22.091623] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:23.014 [2024-11-05 11:26:22.091759] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65196 ] 00:10:23.014 [2024-11-05 11:26:22.262736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.273 [2024-11-05 11:26:22.378918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.531 [2024-11-05 11:26:22.574267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.531 [2024-11-05 11:26:22.574419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:23.789 
11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.789 malloc1 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.789 [2024-11-05 11:26:22.973034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.789 [2024-11-05 11:26:22.973176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.789 [2024-11-05 11:26:22.973222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:23.789 [2024-11-05 11:26:22.973254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.789 [2024-11-05 11:26:22.975382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.789 [2024-11-05 11:26:22.975453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.789 pt1 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.789 11:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.789 malloc2 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.789 [2024-11-05 11:26:23.033048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.789 [2024-11-05 11:26:23.033111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.789 [2024-11-05 11:26:23.033163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:23.789 [2024-11-05 11:26:23.033175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.789 [2024-11-05 11:26:23.035402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.789 [2024-11-05 11:26:23.035442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.789 
pt2 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.789 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.790 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.049 malloc3 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.049 [2024-11-05 11:26:23.098709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.049 [2024-11-05 11:26:23.098822] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.049 [2024-11-05 11:26:23.098860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:24.049 [2024-11-05 11:26:23.098890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.049 [2024-11-05 11:26:23.100986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.049 [2024-11-05 11:26:23.101059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.049 pt3 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.049 [2024-11-05 11:26:23.110736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.049 [2024-11-05 11:26:23.112515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.049 [2024-11-05 11:26:23.112617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.049 [2024-11-05 11:26:23.112804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.049 [2024-11-05 11:26:23.112859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.049 [2024-11-05 11:26:23.113116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:24.049 [2024-11-05 11:26:23.113340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.049 [2024-11-05 11:26:23.113386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:24.049 [2024-11-05 11:26:23.113573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.049 11:26:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.049 "name": "raid_bdev1", 00:10:24.049 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:24.049 "strip_size_kb": 64, 00:10:24.049 "state": "online", 00:10:24.049 "raid_level": "raid0", 00:10:24.049 "superblock": true, 00:10:24.049 "num_base_bdevs": 3, 00:10:24.049 "num_base_bdevs_discovered": 3, 00:10:24.049 "num_base_bdevs_operational": 3, 00:10:24.049 "base_bdevs_list": [ 00:10:24.049 { 00:10:24.049 "name": "pt1", 00:10:24.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.049 "is_configured": true, 00:10:24.049 "data_offset": 2048, 00:10:24.049 "data_size": 63488 00:10:24.049 }, 00:10:24.049 { 00:10:24.049 "name": "pt2", 00:10:24.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.049 "is_configured": true, 00:10:24.049 "data_offset": 2048, 00:10:24.049 "data_size": 63488 00:10:24.049 }, 00:10:24.049 { 00:10:24.049 "name": "pt3", 00:10:24.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.049 "is_configured": true, 00:10:24.049 "data_offset": 2048, 00:10:24.049 "data_size": 63488 00:10:24.049 } 00:10:24.049 ] 00:10:24.049 }' 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.049 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.308 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.308 [2024-11-05 11:26:23.574372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.566 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.566 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.566 "name": "raid_bdev1", 00:10:24.566 "aliases": [ 00:10:24.566 "04ff9784-2538-429b-aadd-450ee0499559" 00:10:24.566 ], 00:10:24.566 "product_name": "Raid Volume", 00:10:24.566 "block_size": 512, 00:10:24.566 "num_blocks": 190464, 00:10:24.566 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:24.566 "assigned_rate_limits": { 00:10:24.566 "rw_ios_per_sec": 0, 00:10:24.566 "rw_mbytes_per_sec": 0, 00:10:24.566 "r_mbytes_per_sec": 0, 00:10:24.566 "w_mbytes_per_sec": 0 00:10:24.566 }, 00:10:24.566 "claimed": false, 00:10:24.566 "zoned": false, 00:10:24.566 "supported_io_types": { 00:10:24.566 "read": true, 00:10:24.566 "write": true, 00:10:24.566 "unmap": true, 00:10:24.566 "flush": true, 00:10:24.566 "reset": true, 00:10:24.566 "nvme_admin": false, 00:10:24.566 "nvme_io": false, 00:10:24.566 "nvme_io_md": false, 00:10:24.566 "write_zeroes": true, 00:10:24.566 "zcopy": false, 00:10:24.566 "get_zone_info": false, 00:10:24.566 "zone_management": false, 00:10:24.566 "zone_append": false, 00:10:24.566 "compare": 
false, 00:10:24.566 "compare_and_write": false, 00:10:24.566 "abort": false, 00:10:24.566 "seek_hole": false, 00:10:24.566 "seek_data": false, 00:10:24.566 "copy": false, 00:10:24.566 "nvme_iov_md": false 00:10:24.566 }, 00:10:24.566 "memory_domains": [ 00:10:24.566 { 00:10:24.566 "dma_device_id": "system", 00:10:24.566 "dma_device_type": 1 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.566 "dma_device_type": 2 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "dma_device_id": "system", 00:10:24.566 "dma_device_type": 1 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.566 "dma_device_type": 2 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "dma_device_id": "system", 00:10:24.566 "dma_device_type": 1 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.566 "dma_device_type": 2 00:10:24.566 } 00:10:24.566 ], 00:10:24.566 "driver_specific": { 00:10:24.566 "raid": { 00:10:24.566 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:24.566 "strip_size_kb": 64, 00:10:24.566 "state": "online", 00:10:24.566 "raid_level": "raid0", 00:10:24.566 "superblock": true, 00:10:24.566 "num_base_bdevs": 3, 00:10:24.566 "num_base_bdevs_discovered": 3, 00:10:24.566 "num_base_bdevs_operational": 3, 00:10:24.566 "base_bdevs_list": [ 00:10:24.566 { 00:10:24.566 "name": "pt1", 00:10:24.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.566 "is_configured": true, 00:10:24.566 "data_offset": 2048, 00:10:24.566 "data_size": 63488 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "name": "pt2", 00:10:24.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.566 "is_configured": true, 00:10:24.566 "data_offset": 2048, 00:10:24.566 "data_size": 63488 00:10:24.566 }, 00:10:24.566 { 00:10:24.566 "name": "pt3", 00:10:24.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.567 "is_configured": true, 00:10:24.567 "data_offset": 2048, 00:10:24.567 "data_size": 
63488 00:10:24.567 } 00:10:24.567 ] 00:10:24.567 } 00:10:24.567 } 00:10:24.567 }' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.567 pt2 00:10:24.567 pt3' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.567 
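The `jq` filter at bdev_raid.sh@188 above can be exercised standalone against a minimal sketch of the dumped bdev JSON (field set trimmed to what the filter touches; this snippet is illustrative sample data, not SPDK output):

```shell
# Trimmed sketch of the raid bdev JSON dumped above; only the fields the
# @188 filter reads are kept. pt3 is flipped to unconfigured here to show
# that select() actually filters.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false}]}}}'
# Same filter as bdev_raid.sh@188: emit the names of configured base bdevs.
echo "$json" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
```

With all three base bdevs configured, as in the log, this is how `base_bdev_names` ends up holding `pt1 pt2 pt3`.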
11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.567 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 [2024-11-05 11:26:23.865764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04ff9784-2538-429b-aadd-450ee0499559 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04ff9784-2538-429b-aadd-450ee0499559 ']' 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 [2024-11-05 11:26:23.909414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.826 [2024-11-05 11:26:23.909511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.826 [2024-11-05 11:26:23.909627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.826 [2024-11-05 11:26:23.909705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.826 [2024-11-05 11:26:23.909780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.826 11:26:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.826 11:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.826 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.826 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:24.826 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.826 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.827 [2024-11-05 11:26:24.061253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.827 [2024-11-05 11:26:24.063247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.827 [2024-11-05 11:26:24.063352] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:24.827 [2024-11-05 11:26:24.063426] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:24.827 [2024-11-05 11:26:24.063526] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:24.827 [2024-11-05 11:26:24.063609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:24.827 [2024-11-05 11:26:24.063666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.827 [2024-11-05 11:26:24.063705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:24.827 request: 00:10:24.827 { 00:10:24.827 "name": "raid_bdev1", 00:10:24.827 "raid_level": "raid0", 00:10:24.827 "base_bdevs": [ 00:10:24.827 "malloc1", 00:10:24.827 "malloc2", 00:10:24.827 "malloc3" 00:10:24.827 ], 00:10:24.827 "strip_size_kb": 64, 00:10:24.827 "superblock": false, 00:10:24.827 "method": "bdev_raid_create", 00:10:24.827 "req_id": 1 00:10:24.827 } 00:10:24.827 Got JSON-RPC error response 00:10:24.827 response: 00:10:24.827 { 00:10:24.827 "code": -17, 00:10:24.827 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.827 } 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test 
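The `NOT` wrapper above expects the duplicate `bdev_raid_create` to fail; the JSON-RPC error body it prints can be picked apart with `jq`, sketched here on the error object copied from the log:

```shell
# Error body returned for the duplicate bdev_raid_create, as logged above.
err='{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'
# -17 corresponds to -EEXIST: the base bdevs already carry a superblock
# for a different raid bdev, so the create is rejected.
echo "$err" | jq -r '.code'
```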
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.827 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.085 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.085 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.085 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.085 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.085 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.085 [2024-11-05 11:26:24.117023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.085 [2024-11-05 11:26:24.117120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.086 [2024-11-05 11:26:24.117168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:25.086 [2024-11-05 11:26:24.117215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.086 [2024-11-05 11:26:24.119564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.086 [2024-11-05 11:26:24.119640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.086 [2024-11-05 11:26:24.119748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.086 [2024-11-05 11:26:24.119840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:25.086 pt1 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.086 "name": "raid_bdev1", 00:10:25.086 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:25.086 
"strip_size_kb": 64, 00:10:25.086 "state": "configuring", 00:10:25.086 "raid_level": "raid0", 00:10:25.086 "superblock": true, 00:10:25.086 "num_base_bdevs": 3, 00:10:25.086 "num_base_bdevs_discovered": 1, 00:10:25.086 "num_base_bdevs_operational": 3, 00:10:25.086 "base_bdevs_list": [ 00:10:25.086 { 00:10:25.086 "name": "pt1", 00:10:25.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.086 "is_configured": true, 00:10:25.086 "data_offset": 2048, 00:10:25.086 "data_size": 63488 00:10:25.086 }, 00:10:25.086 { 00:10:25.086 "name": null, 00:10:25.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.086 "is_configured": false, 00:10:25.086 "data_offset": 2048, 00:10:25.086 "data_size": 63488 00:10:25.086 }, 00:10:25.086 { 00:10:25.086 "name": null, 00:10:25.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.086 "is_configured": false, 00:10:25.086 "data_offset": 2048, 00:10:25.086 "data_size": 63488 00:10:25.086 } 00:10:25.086 ] 00:10:25.086 }' 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.086 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-05 11:26:24.628187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.652 [2024-11-05 11:26:24.628325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.652 [2024-11-05 11:26:24.628354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:25.652 [2024-11-05 11:26:24.628365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.652 [2024-11-05 11:26:24.628818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.652 [2024-11-05 11:26:24.628844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.652 [2024-11-05 11:26:24.628935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.652 [2024-11-05 11:26:24.628962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.652 pt2 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-05 11:26:24.640213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.652 11:26:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.652 "name": "raid_bdev1", 00:10:25.652 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:25.652 "strip_size_kb": 64, 00:10:25.652 "state": "configuring", 00:10:25.652 "raid_level": "raid0", 00:10:25.652 "superblock": true, 00:10:25.652 "num_base_bdevs": 3, 00:10:25.652 "num_base_bdevs_discovered": 1, 00:10:25.652 "num_base_bdevs_operational": 3, 00:10:25.652 "base_bdevs_list": [ 00:10:25.652 { 00:10:25.652 "name": "pt1", 00:10:25.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.652 "is_configured": true, 00:10:25.652 "data_offset": 2048, 00:10:25.652 "data_size": 63488 00:10:25.652 }, 00:10:25.652 { 00:10:25.652 "name": null, 00:10:25.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.652 "is_configured": false, 00:10:25.652 "data_offset": 0, 00:10:25.652 "data_size": 63488 00:10:25.652 }, 00:10:25.652 { 00:10:25.652 "name": null, 00:10:25.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.652 
"is_configured": false, 00:10:25.652 "data_offset": 2048, 00:10:25.652 "data_size": 63488 00:10:25.652 } 00:10:25.652 ] 00:10:25.652 }' 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.652 11:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.912 [2024-11-05 11:26:25.079390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.912 [2024-11-05 11:26:25.079547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.912 [2024-11-05 11:26:25.079585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:25.912 [2024-11-05 11:26:25.079621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.912 [2024-11-05 11:26:25.080123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.912 [2024-11-05 11:26:25.080206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.912 [2024-11-05 11:26:25.080339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.912 [2024-11-05 11:26:25.080399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.912 pt2 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.912 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.912 [2024-11-05 11:26:25.091359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.912 [2024-11-05 11:26:25.091463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.912 [2024-11-05 11:26:25.091497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.913 [2024-11-05 11:26:25.091531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.913 [2024-11-05 11:26:25.091997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.913 [2024-11-05 11:26:25.092062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.913 [2024-11-05 11:26:25.092181] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:25.913 [2024-11-05 11:26:25.092238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.913 [2024-11-05 11:26:25.092411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.913 [2024-11-05 11:26:25.092452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.913 [2024-11-05 11:26:25.092732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:25.913 [2024-11-05 11:26:25.092910] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.913 [2024-11-05 11:26:25.092950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:25.913 [2024-11-05 11:26:25.093145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.913 pt3 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.913 "name": "raid_bdev1", 00:10:25.913 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:25.913 "strip_size_kb": 64, 00:10:25.913 "state": "online", 00:10:25.913 "raid_level": "raid0", 00:10:25.913 "superblock": true, 00:10:25.913 "num_base_bdevs": 3, 00:10:25.913 "num_base_bdevs_discovered": 3, 00:10:25.913 "num_base_bdevs_operational": 3, 00:10:25.913 "base_bdevs_list": [ 00:10:25.913 { 00:10:25.913 "name": "pt1", 00:10:25.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.913 "is_configured": true, 00:10:25.913 "data_offset": 2048, 00:10:25.913 "data_size": 63488 00:10:25.913 }, 00:10:25.913 { 00:10:25.913 "name": "pt2", 00:10:25.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.913 "is_configured": true, 00:10:25.913 "data_offset": 2048, 00:10:25.913 "data_size": 63488 00:10:25.913 }, 00:10:25.913 { 00:10:25.913 "name": "pt3", 00:10:25.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.913 "is_configured": true, 00:10:25.913 "data_offset": 2048, 00:10:25.913 "data_size": 63488 00:10:25.913 } 00:10:25.913 ] 00:10:25.913 }' 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.913 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.482 11:26:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 [2024-11-05 11:26:25.538932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.482 "name": "raid_bdev1", 00:10:26.482 "aliases": [ 00:10:26.482 "04ff9784-2538-429b-aadd-450ee0499559" 00:10:26.482 ], 00:10:26.482 "product_name": "Raid Volume", 00:10:26.482 "block_size": 512, 00:10:26.482 "num_blocks": 190464, 00:10:26.482 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:26.482 "assigned_rate_limits": { 00:10:26.482 "rw_ios_per_sec": 0, 00:10:26.482 "rw_mbytes_per_sec": 0, 00:10:26.482 "r_mbytes_per_sec": 0, 00:10:26.482 "w_mbytes_per_sec": 0 00:10:26.482 }, 00:10:26.482 "claimed": false, 00:10:26.482 "zoned": false, 00:10:26.482 "supported_io_types": { 00:10:26.482 "read": true, 00:10:26.482 "write": true, 00:10:26.482 "unmap": true, 00:10:26.482 "flush": true, 00:10:26.482 "reset": true, 00:10:26.482 "nvme_admin": false, 00:10:26.482 "nvme_io": false, 00:10:26.482 "nvme_io_md": false, 00:10:26.482 
"write_zeroes": true, 00:10:26.482 "zcopy": false, 00:10:26.482 "get_zone_info": false, 00:10:26.482 "zone_management": false, 00:10:26.482 "zone_append": false, 00:10:26.482 "compare": false, 00:10:26.482 "compare_and_write": false, 00:10:26.482 "abort": false, 00:10:26.482 "seek_hole": false, 00:10:26.482 "seek_data": false, 00:10:26.482 "copy": false, 00:10:26.482 "nvme_iov_md": false 00:10:26.482 }, 00:10:26.482 "memory_domains": [ 00:10:26.482 { 00:10:26.482 "dma_device_id": "system", 00:10:26.482 "dma_device_type": 1 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.482 "dma_device_type": 2 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "dma_device_id": "system", 00:10:26.482 "dma_device_type": 1 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.482 "dma_device_type": 2 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "dma_device_id": "system", 00:10:26.482 "dma_device_type": 1 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.482 "dma_device_type": 2 00:10:26.482 } 00:10:26.482 ], 00:10:26.482 "driver_specific": { 00:10:26.482 "raid": { 00:10:26.482 "uuid": "04ff9784-2538-429b-aadd-450ee0499559", 00:10:26.482 "strip_size_kb": 64, 00:10:26.482 "state": "online", 00:10:26.482 "raid_level": "raid0", 00:10:26.482 "superblock": true, 00:10:26.482 "num_base_bdevs": 3, 00:10:26.482 "num_base_bdevs_discovered": 3, 00:10:26.482 "num_base_bdevs_operational": 3, 00:10:26.482 "base_bdevs_list": [ 00:10:26.482 { 00:10:26.482 "name": "pt1", 00:10:26.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.482 "is_configured": true, 00:10:26.482 "data_offset": 2048, 00:10:26.482 "data_size": 63488 00:10:26.482 }, 00:10:26.482 { 00:10:26.482 "name": "pt2", 00:10:26.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.482 "is_configured": true, 00:10:26.482 "data_offset": 2048, 00:10:26.482 "data_size": 63488 00:10:26.482 }, 00:10:26.482 
{ 00:10:26.482 "name": "pt3", 00:10:26.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.482 "is_configured": true, 00:10:26.482 "data_offset": 2048, 00:10:26.482 "data_size": 63488 00:10:26.482 } 00:10:26.482 ] 00:10:26.482 } 00:10:26.482 } 00:10:26.482 }' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.482 pt2 00:10:26.482 pt3' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.482 11:26:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.742 
[2024-11-05 11:26:25.826468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04ff9784-2538-429b-aadd-450ee0499559 '!=' 04ff9784-2538-429b-aadd-450ee0499559 ']' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65196 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65196 ']' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65196 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65196 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:26.742 killing process with pid 65196 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65196' 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65196 00:10:26.742 [2024-11-05 11:26:25.906839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.742 [2024-11-05 11:26:25.906974] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.742 11:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65196 00:10:26.742 [2024-11-05 11:26:25.907036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.742 [2024-11-05 11:26:25.907048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.001 [2024-11-05 11:26:26.204292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.380 11:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.380 00:10:28.380 real 0m5.317s 00:10:28.380 user 0m7.634s 00:10:28.380 sys 0m0.940s 00:10:28.380 ************************************ 00:10:28.380 END TEST raid_superblock_test 00:10:28.380 ************************************ 00:10:28.380 11:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.380 11:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.380 11:26:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:28.380 11:26:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:28.380 11:26:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.380 11:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.380 ************************************ 00:10:28.380 START TEST raid_read_error_test 00:10:28.380 ************************************ 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:28.380 11:26:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VDFjM0tFjL 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65450 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65450 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65450 ']' 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.380 11:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.381 [2024-11-05 11:26:27.495644] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:28.381 [2024-11-05 11:26:27.495752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65450 ] 00:10:28.640 [2024-11-05 11:26:27.671582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.640 [2024-11-05 11:26:27.788103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.902 [2024-11-05 11:26:27.983426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.902 [2024-11-05 11:26:27.983458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.161 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.161 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:29.161 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.162 BaseBdev1_malloc 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.162 true 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.162 [2024-11-05 11:26:28.417909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.162 [2024-11-05 11:26:28.417980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.162 [2024-11-05 11:26:28.418001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.162 [2024-11-05 11:26:28.418012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.162 [2024-11-05 11:26:28.420162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.162 [2024-11-05 11:26:28.420199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.162 BaseBdev1 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.162 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 BaseBdev2_malloc 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 true 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.421 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 [2024-11-05 11:26:28.484836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.422 [2024-11-05 11:26:28.484909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.422 [2024-11-05 11:26:28.484945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.422 [2024-11-05 11:26:28.484956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.422 [2024-11-05 11:26:28.487011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.422 [2024-11-05 11:26:28.487123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.422 BaseBdev2 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 BaseBdev3_malloc 00:10:29.422 11:26:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 true 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 [2024-11-05 11:26:28.562625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.422 [2024-11-05 11:26:28.562733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.422 [2024-11-05 11:26:28.562754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.422 [2024-11-05 11:26:28.562765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.422 [2024-11-05 11:26:28.564969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.422 [2024-11-05 11:26:28.565012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:29.422 BaseBdev3 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 [2024-11-05 11:26:28.574748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.422 [2024-11-05 11:26:28.576747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.422 [2024-11-05 11:26:28.576832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.422 [2024-11-05 11:26:28.577034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.422 [2024-11-05 11:26:28.577049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:29.422 [2024-11-05 11:26:28.577368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:29.422 [2024-11-05 11:26:28.577528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.422 [2024-11-05 11:26:28.577551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.422 [2024-11-05 11:26:28.577727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.422 11:26:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.422 "name": "raid_bdev1", 00:10:29.422 "uuid": "b3bc4491-8f4b-4f29-8fb6-1fa3359ce6c9", 00:10:29.422 "strip_size_kb": 64, 00:10:29.422 "state": "online", 00:10:29.422 "raid_level": "raid0", 00:10:29.422 "superblock": true, 00:10:29.422 "num_base_bdevs": 3, 00:10:29.422 "num_base_bdevs_discovered": 3, 00:10:29.422 "num_base_bdevs_operational": 3, 00:10:29.422 "base_bdevs_list": [ 00:10:29.422 { 00:10:29.422 "name": "BaseBdev1", 00:10:29.422 "uuid": "be2c2a72-a7fc-5084-b1b4-b3fd601aee7a", 00:10:29.422 "is_configured": true, 00:10:29.422 "data_offset": 2048, 00:10:29.422 "data_size": 63488 00:10:29.422 }, 00:10:29.422 { 00:10:29.422 "name": "BaseBdev2", 00:10:29.422 "uuid": "e8663b87-c37b-5ffd-afb0-f07b7543f9cd", 00:10:29.422 "is_configured": true, 00:10:29.422 "data_offset": 2048, 00:10:29.422 "data_size": 63488 
00:10:29.422 }, 00:10:29.422 { 00:10:29.422 "name": "BaseBdev3", 00:10:29.422 "uuid": "2cc6c7fa-5f0b-5642-b167-cd0422fe3add", 00:10:29.422 "is_configured": true, 00:10:29.422 "data_offset": 2048, 00:10:29.422 "data_size": 63488 00:10:29.422 } 00:10:29.422 ] 00:10:29.422 }' 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.422 11:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.990 11:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.990 11:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.990 [2024-11-05 11:26:29.119182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.934 "name": "raid_bdev1", 00:10:30.934 "uuid": "b3bc4491-8f4b-4f29-8fb6-1fa3359ce6c9", 00:10:30.934 "strip_size_kb": 64, 00:10:30.934 "state": "online", 00:10:30.934 "raid_level": "raid0", 00:10:30.934 "superblock": true, 00:10:30.934 "num_base_bdevs": 3, 00:10:30.934 "num_base_bdevs_discovered": 3, 00:10:30.934 "num_base_bdevs_operational": 3, 00:10:30.934 "base_bdevs_list": [ 00:10:30.934 { 00:10:30.934 "name": "BaseBdev1", 00:10:30.934 "uuid": "be2c2a72-a7fc-5084-b1b4-b3fd601aee7a", 00:10:30.934 "is_configured": true, 00:10:30.934 "data_offset": 2048, 00:10:30.934 "data_size": 63488 
00:10:30.934 }, 00:10:30.934 { 00:10:30.934 "name": "BaseBdev2", 00:10:30.934 "uuid": "e8663b87-c37b-5ffd-afb0-f07b7543f9cd", 00:10:30.934 "is_configured": true, 00:10:30.934 "data_offset": 2048, 00:10:30.934 "data_size": 63488 00:10:30.934 }, 00:10:30.934 { 00:10:30.934 "name": "BaseBdev3", 00:10:30.934 "uuid": "2cc6c7fa-5f0b-5642-b167-cd0422fe3add", 00:10:30.934 "is_configured": true, 00:10:30.934 "data_offset": 2048, 00:10:30.934 "data_size": 63488 00:10:30.934 } 00:10:30.934 ] 00:10:30.934 }' 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.934 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.503 [2024-11-05 11:26:30.517945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.503 [2024-11-05 11:26:30.518044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.503 [2024-11-05 11:26:30.520605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.503 [2024-11-05 11:26:30.520689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.503 [2024-11-05 11:26:30.520745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.503 [2024-11-05 11:26:30.520785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.503 { 00:10:31.503 "results": [ 00:10:31.503 { 00:10:31.503 "job": "raid_bdev1", 
00:10:31.503 "core_mask": "0x1", 00:10:31.503 "workload": "randrw", 00:10:31.503 "percentage": 50, 00:10:31.503 "status": "finished", 00:10:31.503 "queue_depth": 1, 00:10:31.503 "io_size": 131072, 00:10:31.503 "runtime": 1.399731, 00:10:31.503 "iops": 15587.280698934295, 00:10:31.503 "mibps": 1948.410087366787, 00:10:31.503 "io_failed": 1, 00:10:31.503 "io_timeout": 0, 00:10:31.503 "avg_latency_us": 89.26956022264157, 00:10:31.503 "min_latency_us": 26.1589519650655, 00:10:31.503 "max_latency_us": 1473.844541484716 00:10:31.503 } 00:10:31.503 ], 00:10:31.503 "core_count": 1 00:10:31.503 } 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65450 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65450 ']' 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65450 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65450 00:10:31.503 killing process with pid 65450 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65450' 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65450 00:10:31.503 [2024-11-05 11:26:30.564203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.503 11:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65450 00:10:31.762 [2024-11-05 
11:26:30.795562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.699 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.699 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VDFjM0tFjL 00:10:32.699 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:32.958 ************************************ 00:10:32.958 END TEST raid_read_error_test 00:10:32.958 ************************************ 00:10:32.958 00:10:32.958 real 0m4.593s 00:10:32.958 user 0m5.450s 00:10:32.958 sys 0m0.615s 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.958 11:26:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.958 11:26:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:32.958 11:26:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:32.958 11:26:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.958 11:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.958 ************************************ 00:10:32.958 START TEST raid_write_error_test 00:10:32.958 ************************************ 00:10:32.958 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:10:32.958 11:26:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:32.959 11:26:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7AIcgthxmZ 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65601 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65601 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65601 ']' 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:32.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.959 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.959 [2024-11-05 11:26:32.150037] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:10:32.959 [2024-11-05 11:26:32.150172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65601 ] 00:10:33.218 [2024-11-05 11:26:32.325674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.218 [2024-11-05 11:26:32.443008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.476 [2024-11-05 11:26:32.643753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.476 [2024-11-05 11:26:32.643798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.736 11:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 BaseBdev1_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 true 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 [2024-11-05 11:26:33.041674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.996 [2024-11-05 11:26:33.041738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.996 [2024-11-05 11:26:33.041760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:33.996 [2024-11-05 11:26:33.041773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.996 [2024-11-05 11:26:33.044254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.996 [2024-11-05 11:26:33.044352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.996 BaseBdev1 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.996 BaseBdev2_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 true 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 [2024-11-05 11:26:33.111739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.996 [2024-11-05 11:26:33.111865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.996 [2024-11-05 11:26:33.111907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:33.996 [2024-11-05 11:26:33.111941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.996 [2024-11-05 11:26:33.114260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.996 [2024-11-05 11:26:33.114342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.996 BaseBdev2 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.996 11:26:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 BaseBdev3_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 true 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 [2024-11-05 11:26:33.193967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:33.996 [2024-11-05 11:26:33.194048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.996 [2024-11-05 11:26:33.194071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:33.996 [2024-11-05 11:26:33.194081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.996 [2024-11-05 11:26:33.196350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.996 [2024-11-05 11:26:33.196400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:33.996 BaseBdev3 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.996 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.996 [2024-11-05 11:26:33.206004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.996 [2024-11-05 11:26:33.207869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.996 [2024-11-05 11:26:33.208034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.996 [2024-11-05 11:26:33.208282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.996 [2024-11-05 11:26:33.208297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:33.996 [2024-11-05 11:26:33.208573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:33.996 [2024-11-05 11:26:33.208744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.996 [2024-11-05 11:26:33.208757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:33.996 [2024-11-05 11:26:33.208945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.997 "name": "raid_bdev1", 00:10:33.997 "uuid": "61b7e331-3821-4148-a2ca-e6376f4d9666", 00:10:33.997 "strip_size_kb": 64, 00:10:33.997 "state": "online", 00:10:33.997 "raid_level": "raid0", 00:10:33.997 "superblock": true, 00:10:33.997 "num_base_bdevs": 3, 00:10:33.997 "num_base_bdevs_discovered": 3, 00:10:33.997 "num_base_bdevs_operational": 3, 00:10:33.997 "base_bdevs_list": [ 00:10:33.997 { 00:10:33.997 "name": "BaseBdev1", 
00:10:33.997 "uuid": "97aef40e-2d06-537d-9643-c0dc1bceb308", 00:10:33.997 "is_configured": true, 00:10:33.997 "data_offset": 2048, 00:10:33.997 "data_size": 63488 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "name": "BaseBdev2", 00:10:33.997 "uuid": "7d3923c1-d556-56e0-8787-0564c2cdb777", 00:10:33.997 "is_configured": true, 00:10:33.997 "data_offset": 2048, 00:10:33.997 "data_size": 63488 00:10:33.997 }, 00:10:33.997 { 00:10:33.997 "name": "BaseBdev3", 00:10:33.997 "uuid": "c49582c6-c22f-5776-a675-78a2b1340e86", 00:10:33.997 "is_configured": true, 00:10:33.997 "data_offset": 2048, 00:10:33.997 "data_size": 63488 00:10:33.997 } 00:10:33.997 ] 00:10:33.997 }' 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.997 11:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.566 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:34.566 11:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:34.566 [2024-11-05 11:26:33.718430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.503 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.503 "name": "raid_bdev1", 00:10:35.504 "uuid": "61b7e331-3821-4148-a2ca-e6376f4d9666", 00:10:35.504 "strip_size_kb": 64, 00:10:35.504 "state": "online", 00:10:35.504 
"raid_level": "raid0", 00:10:35.504 "superblock": true, 00:10:35.504 "num_base_bdevs": 3, 00:10:35.504 "num_base_bdevs_discovered": 3, 00:10:35.504 "num_base_bdevs_operational": 3, 00:10:35.504 "base_bdevs_list": [ 00:10:35.504 { 00:10:35.504 "name": "BaseBdev1", 00:10:35.504 "uuid": "97aef40e-2d06-537d-9643-c0dc1bceb308", 00:10:35.504 "is_configured": true, 00:10:35.504 "data_offset": 2048, 00:10:35.504 "data_size": 63488 00:10:35.504 }, 00:10:35.504 { 00:10:35.504 "name": "BaseBdev2", 00:10:35.504 "uuid": "7d3923c1-d556-56e0-8787-0564c2cdb777", 00:10:35.504 "is_configured": true, 00:10:35.504 "data_offset": 2048, 00:10:35.504 "data_size": 63488 00:10:35.504 }, 00:10:35.504 { 00:10:35.504 "name": "BaseBdev3", 00:10:35.504 "uuid": "c49582c6-c22f-5776-a675-78a2b1340e86", 00:10:35.504 "is_configured": true, 00:10:35.504 "data_offset": 2048, 00:10:35.504 "data_size": 63488 00:10:35.504 } 00:10:35.504 ] 00:10:35.504 }' 00:10:35.504 11:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.504 11:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.073 [2024-11-05 11:26:35.090572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.073 [2024-11-05 11:26:35.090604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.073 [2024-11-05 11:26:35.093474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.073 { 00:10:36.073 "results": [ 00:10:36.073 { 00:10:36.073 "job": "raid_bdev1", 00:10:36.073 "core_mask": "0x1", 00:10:36.073 "workload": "randrw", 00:10:36.073 "percentage": 
50, 00:10:36.073 "status": "finished", 00:10:36.073 "queue_depth": 1, 00:10:36.073 "io_size": 131072, 00:10:36.073 "runtime": 1.372952, 00:10:36.073 "iops": 15449.921046038025, 00:10:36.073 "mibps": 1931.2401307547532, 00:10:36.073 "io_failed": 1, 00:10:36.073 "io_timeout": 0, 00:10:36.073 "avg_latency_us": 89.9267812417079, 00:10:36.073 "min_latency_us": 20.90480349344978, 00:10:36.073 "max_latency_us": 1602.6270742358079 00:10:36.073 } 00:10:36.073 ], 00:10:36.073 "core_count": 1 00:10:36.073 } 00:10:36.073 [2024-11-05 11:26:35.093612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.073 [2024-11-05 11:26:35.093660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.073 [2024-11-05 11:26:35.093673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65601 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65601 ']' 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65601 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65601 00:10:36.073 killing process with pid 65601 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:36.073 11:26:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65601' 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65601 00:10:36.073 11:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65601 00:10:36.073 [2024-11-05 11:26:35.135898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.332 [2024-11-05 11:26:35.383459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7AIcgthxmZ 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:37.749 00:10:37.749 real 0m4.558s 00:10:37.749 user 0m5.377s 00:10:37.749 sys 0m0.548s 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.749 11:26:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.749 ************************************ 00:10:37.749 END TEST raid_write_error_test 00:10:37.749 ************************************ 00:10:37.749 11:26:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:37.749 11:26:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:37.749 11:26:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:37.749 11:26:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.749 11:26:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.749 ************************************ 00:10:37.749 START TEST raid_state_function_test 00:10:37.749 ************************************ 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.749 11:26:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65739 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.749 Process raid pid: 65739 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65739' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65739 00:10:37.749 11:26:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65739 ']' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:37.749 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.749 [2024-11-05 11:26:36.791238] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:10:37.749 [2024-11-05 11:26:36.791906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.749 [2024-11-05 11:26:36.956094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.041 [2024-11-05 11:26:37.075811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.041 [2024-11-05 11:26:37.277533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.041 [2024-11-05 11:26:37.277643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.610 [2024-11-05 11:26:37.621786] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.610 [2024-11-05 11:26:37.621904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.610 [2024-11-05 11:26:37.621936] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.610 [2024-11-05 11:26:37.621960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.610 [2024-11-05 11:26:37.621978] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.610 [2024-11-05 11:26:37.621999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.610 "name": "Existed_Raid", 00:10:38.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.610 "strip_size_kb": 64, 00:10:38.610 "state": "configuring", 00:10:38.610 "raid_level": "concat", 00:10:38.610 "superblock": false, 00:10:38.610 "num_base_bdevs": 3, 00:10:38.610 "num_base_bdevs_discovered": 0, 00:10:38.610 "num_base_bdevs_operational": 3, 00:10:38.610 "base_bdevs_list": [ 00:10:38.610 { 00:10:38.610 "name": "BaseBdev1", 00:10:38.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.610 "is_configured": false, 00:10:38.610 "data_offset": 0, 00:10:38.610 "data_size": 0 00:10:38.610 }, 00:10:38.610 { 00:10:38.610 "name": "BaseBdev2", 00:10:38.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.610 "is_configured": false, 00:10:38.610 "data_offset": 0, 00:10:38.610 "data_size": 0 00:10:38.610 }, 00:10:38.610 { 00:10:38.610 "name": "BaseBdev3", 00:10:38.610 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:38.610 "is_configured": false, 00:10:38.610 "data_offset": 0, 00:10:38.610 "data_size": 0 00:10:38.610 } 00:10:38.610 ] 00:10:38.610 }' 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.610 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.870 [2024-11-05 11:26:38.053002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.870 [2024-11-05 11:26:38.053095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.870 [2024-11-05 11:26:38.060991] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.870 [2024-11-05 11:26:38.061035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.870 [2024-11-05 11:26:38.061045] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.870 [2024-11-05 11:26:38.061054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:38.870 [2024-11-05 11:26:38.061060] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.870 [2024-11-05 11:26:38.061069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.870 [2024-11-05 11:26:38.103804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.870 BaseBdev1 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.870 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.870 [ 00:10:38.870 { 00:10:38.870 "name": "BaseBdev1", 00:10:38.870 "aliases": [ 00:10:38.870 "5b72740e-edf5-4122-a912-9fec8cd3c7b1" 00:10:38.870 ], 00:10:38.870 "product_name": "Malloc disk", 00:10:38.870 "block_size": 512, 00:10:38.870 "num_blocks": 65536, 00:10:38.870 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:38.870 "assigned_rate_limits": { 00:10:38.870 "rw_ios_per_sec": 0, 00:10:38.870 "rw_mbytes_per_sec": 0, 00:10:38.870 "r_mbytes_per_sec": 0, 00:10:38.870 "w_mbytes_per_sec": 0 00:10:38.870 }, 00:10:38.870 "claimed": true, 00:10:38.870 "claim_type": "exclusive_write", 00:10:38.870 "zoned": false, 00:10:38.870 "supported_io_types": { 00:10:38.870 "read": true, 00:10:38.870 "write": true, 00:10:38.870 "unmap": true, 00:10:38.871 "flush": true, 00:10:38.871 "reset": true, 00:10:38.871 "nvme_admin": false, 00:10:38.871 "nvme_io": false, 00:10:38.871 "nvme_io_md": false, 00:10:38.871 "write_zeroes": true, 00:10:38.871 "zcopy": true, 00:10:38.871 "get_zone_info": false, 00:10:38.871 "zone_management": false, 00:10:38.871 "zone_append": false, 00:10:38.871 "compare": false, 00:10:38.871 "compare_and_write": false, 00:10:38.871 "abort": true, 00:10:38.871 "seek_hole": false, 00:10:38.871 "seek_data": false, 00:10:38.871 "copy": true, 00:10:38.871 "nvme_iov_md": false 00:10:38.871 }, 00:10:38.871 "memory_domains": [ 00:10:38.871 { 00:10:38.871 "dma_device_id": "system", 00:10:38.871 "dma_device_type": 1 00:10:38.871 }, 00:10:38.871 { 00:10:38.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:38.871 "dma_device_type": 2 00:10:38.871 } 00:10:38.871 ], 00:10:38.871 "driver_specific": {} 00:10:38.871 } 00:10:38.871 ] 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.871 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.130 11:26:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.130 "name": "Existed_Raid", 00:10:39.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.130 "strip_size_kb": 64, 00:10:39.130 "state": "configuring", 00:10:39.130 "raid_level": "concat", 00:10:39.130 "superblock": false, 00:10:39.130 "num_base_bdevs": 3, 00:10:39.130 "num_base_bdevs_discovered": 1, 00:10:39.130 "num_base_bdevs_operational": 3, 00:10:39.130 "base_bdevs_list": [ 00:10:39.130 { 00:10:39.130 "name": "BaseBdev1", 00:10:39.130 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:39.130 "is_configured": true, 00:10:39.130 "data_offset": 0, 00:10:39.130 "data_size": 65536 00:10:39.130 }, 00:10:39.130 { 00:10:39.130 "name": "BaseBdev2", 00:10:39.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.130 "is_configured": false, 00:10:39.130 "data_offset": 0, 00:10:39.130 "data_size": 0 00:10:39.130 }, 00:10:39.130 { 00:10:39.130 "name": "BaseBdev3", 00:10:39.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.130 "is_configured": false, 00:10:39.130 "data_offset": 0, 00:10:39.130 "data_size": 0 00:10:39.130 } 00:10:39.130 ] 00:10:39.130 }' 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.130 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.390 [2024-11-05 11:26:38.575082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.390 [2024-11-05 11:26:38.575156] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.390 [2024-11-05 11:26:38.583107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.390 [2024-11-05 11:26:38.584964] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.390 [2024-11-05 11:26:38.585041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.390 [2024-11-05 11:26:38.585070] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.390 [2024-11-05 11:26:38.585092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.390 11:26:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.390 "name": "Existed_Raid", 00:10:39.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.390 "strip_size_kb": 64, 00:10:39.390 "state": "configuring", 00:10:39.390 "raid_level": "concat", 00:10:39.390 "superblock": false, 00:10:39.390 "num_base_bdevs": 3, 00:10:39.390 "num_base_bdevs_discovered": 1, 00:10:39.390 "num_base_bdevs_operational": 3, 00:10:39.390 "base_bdevs_list": [ 00:10:39.390 { 00:10:39.390 "name": "BaseBdev1", 00:10:39.390 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:39.390 "is_configured": true, 00:10:39.390 "data_offset": 
0, 00:10:39.390 "data_size": 65536 00:10:39.390 }, 00:10:39.390 { 00:10:39.390 "name": "BaseBdev2", 00:10:39.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.390 "is_configured": false, 00:10:39.390 "data_offset": 0, 00:10:39.390 "data_size": 0 00:10:39.390 }, 00:10:39.390 { 00:10:39.390 "name": "BaseBdev3", 00:10:39.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.390 "is_configured": false, 00:10:39.390 "data_offset": 0, 00:10:39.390 "data_size": 0 00:10:39.390 } 00:10:39.390 ] 00:10:39.390 }' 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.390 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.960 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.960 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 [2024-11-05 11:26:39.027510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.960 BaseBdev2 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 [ 00:10:39.960 { 00:10:39.960 "name": "BaseBdev2", 00:10:39.960 "aliases": [ 00:10:39.960 "acb5906f-3e14-48b7-99d1-9f7bc4d71768" 00:10:39.960 ], 00:10:39.960 "product_name": "Malloc disk", 00:10:39.960 "block_size": 512, 00:10:39.960 "num_blocks": 65536, 00:10:39.960 "uuid": "acb5906f-3e14-48b7-99d1-9f7bc4d71768", 00:10:39.960 "assigned_rate_limits": { 00:10:39.960 "rw_ios_per_sec": 0, 00:10:39.960 "rw_mbytes_per_sec": 0, 00:10:39.960 "r_mbytes_per_sec": 0, 00:10:39.960 "w_mbytes_per_sec": 0 00:10:39.960 }, 00:10:39.960 "claimed": true, 00:10:39.960 "claim_type": "exclusive_write", 00:10:39.960 "zoned": false, 00:10:39.960 "supported_io_types": { 00:10:39.960 "read": true, 00:10:39.960 "write": true, 00:10:39.960 "unmap": true, 00:10:39.960 "flush": true, 00:10:39.960 "reset": true, 00:10:39.960 "nvme_admin": false, 00:10:39.960 "nvme_io": false, 00:10:39.960 "nvme_io_md": false, 00:10:39.960 "write_zeroes": true, 00:10:39.960 "zcopy": true, 00:10:39.960 "get_zone_info": false, 00:10:39.960 "zone_management": false, 00:10:39.960 "zone_append": false, 00:10:39.960 "compare": false, 00:10:39.960 "compare_and_write": false, 00:10:39.960 "abort": true, 00:10:39.960 "seek_hole": 
false, 00:10:39.960 "seek_data": false, 00:10:39.960 "copy": true, 00:10:39.960 "nvme_iov_md": false 00:10:39.960 }, 00:10:39.960 "memory_domains": [ 00:10:39.960 { 00:10:39.960 "dma_device_id": "system", 00:10:39.960 "dma_device_type": 1 00:10:39.960 }, 00:10:39.960 { 00:10:39.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.960 "dma_device_type": 2 00:10:39.960 } 00:10:39.960 ], 00:10:39.960 "driver_specific": {} 00:10:39.960 } 00:10:39.960 ] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.960 "name": "Existed_Raid", 00:10:39.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.960 "strip_size_kb": 64, 00:10:39.960 "state": "configuring", 00:10:39.960 "raid_level": "concat", 00:10:39.960 "superblock": false, 00:10:39.960 "num_base_bdevs": 3, 00:10:39.960 "num_base_bdevs_discovered": 2, 00:10:39.960 "num_base_bdevs_operational": 3, 00:10:39.960 "base_bdevs_list": [ 00:10:39.960 { 00:10:39.960 "name": "BaseBdev1", 00:10:39.960 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:39.960 "is_configured": true, 00:10:39.960 "data_offset": 0, 00:10:39.960 "data_size": 65536 00:10:39.960 }, 00:10:39.960 { 00:10:39.960 "name": "BaseBdev2", 00:10:39.960 "uuid": "acb5906f-3e14-48b7-99d1-9f7bc4d71768", 00:10:39.960 "is_configured": true, 00:10:39.960 "data_offset": 0, 00:10:39.960 "data_size": 65536 00:10:39.960 }, 00:10:39.960 { 00:10:39.960 "name": "BaseBdev3", 00:10:39.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.960 "is_configured": false, 00:10:39.960 "data_offset": 0, 00:10:39.960 "data_size": 0 00:10:39.960 } 00:10:39.960 ] 00:10:39.960 }' 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.960 11:26:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.537 [2024-11-05 11:26:39.562299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.537 [2024-11-05 11:26:39.562432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.537 [2024-11-05 11:26:39.562452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:40.537 [2024-11-05 11:26:39.562731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:40.537 [2024-11-05 11:26:39.562908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.537 [2024-11-05 11:26:39.562919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.537 [2024-11-05 11:26:39.563233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.537 BaseBdev3 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:40.537 11:26:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.537 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.537 [ 00:10:40.537 { 00:10:40.537 "name": "BaseBdev3", 00:10:40.537 "aliases": [ 00:10:40.537 "8e3e8eb9-b65d-433c-ade6-0ca83dea75f7" 00:10:40.537 ], 00:10:40.537 "product_name": "Malloc disk", 00:10:40.537 "block_size": 512, 00:10:40.537 "num_blocks": 65536, 00:10:40.538 "uuid": "8e3e8eb9-b65d-433c-ade6-0ca83dea75f7", 00:10:40.538 "assigned_rate_limits": { 00:10:40.538 "rw_ios_per_sec": 0, 00:10:40.538 "rw_mbytes_per_sec": 0, 00:10:40.538 "r_mbytes_per_sec": 0, 00:10:40.538 "w_mbytes_per_sec": 0 00:10:40.538 }, 00:10:40.538 "claimed": true, 00:10:40.538 "claim_type": "exclusive_write", 00:10:40.538 "zoned": false, 00:10:40.538 "supported_io_types": { 00:10:40.538 "read": true, 00:10:40.538 "write": true, 00:10:40.538 "unmap": true, 00:10:40.538 "flush": true, 00:10:40.538 "reset": true, 00:10:40.538 "nvme_admin": false, 00:10:40.538 "nvme_io": false, 00:10:40.538 "nvme_io_md": false, 00:10:40.538 "write_zeroes": true, 00:10:40.538 "zcopy": true, 00:10:40.538 "get_zone_info": false, 00:10:40.538 "zone_management": false, 00:10:40.538 "zone_append": false, 00:10:40.538 "compare": false, 
00:10:40.538 "compare_and_write": false, 00:10:40.538 "abort": true, 00:10:40.538 "seek_hole": false, 00:10:40.538 "seek_data": false, 00:10:40.538 "copy": true, 00:10:40.538 "nvme_iov_md": false 00:10:40.538 }, 00:10:40.538 "memory_domains": [ 00:10:40.538 { 00:10:40.538 "dma_device_id": "system", 00:10:40.538 "dma_device_type": 1 00:10:40.538 }, 00:10:40.538 { 00:10:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.538 "dma_device_type": 2 00:10:40.538 } 00:10:40.538 ], 00:10:40.538 "driver_specific": {} 00:10:40.538 } 00:10:40.538 ] 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.538 "name": "Existed_Raid", 00:10:40.538 "uuid": "d90134be-c5b4-48ac-80d6-3540890f685b", 00:10:40.538 "strip_size_kb": 64, 00:10:40.538 "state": "online", 00:10:40.538 "raid_level": "concat", 00:10:40.538 "superblock": false, 00:10:40.538 "num_base_bdevs": 3, 00:10:40.538 "num_base_bdevs_discovered": 3, 00:10:40.538 "num_base_bdevs_operational": 3, 00:10:40.538 "base_bdevs_list": [ 00:10:40.538 { 00:10:40.538 "name": "BaseBdev1", 00:10:40.538 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:40.538 "is_configured": true, 00:10:40.538 "data_offset": 0, 00:10:40.538 "data_size": 65536 00:10:40.538 }, 00:10:40.538 { 00:10:40.538 "name": "BaseBdev2", 00:10:40.538 "uuid": "acb5906f-3e14-48b7-99d1-9f7bc4d71768", 00:10:40.538 "is_configured": true, 00:10:40.538 "data_offset": 0, 00:10:40.538 "data_size": 65536 00:10:40.538 }, 00:10:40.538 { 00:10:40.538 "name": "BaseBdev3", 00:10:40.538 "uuid": "8e3e8eb9-b65d-433c-ade6-0ca83dea75f7", 00:10:40.538 "is_configured": true, 00:10:40.538 "data_offset": 0, 00:10:40.538 "data_size": 65536 00:10:40.538 } 00:10:40.538 ] 00:10:40.538 }' 00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:40.538 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.796 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.055 [2024-11-05 11:26:40.073854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.055 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.055 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.055 "name": "Existed_Raid", 00:10:41.055 "aliases": [ 00:10:41.055 "d90134be-c5b4-48ac-80d6-3540890f685b" 00:10:41.055 ], 00:10:41.055 "product_name": "Raid Volume", 00:10:41.055 "block_size": 512, 00:10:41.055 "num_blocks": 196608, 00:10:41.055 "uuid": "d90134be-c5b4-48ac-80d6-3540890f685b", 00:10:41.055 "assigned_rate_limits": { 00:10:41.055 "rw_ios_per_sec": 0, 00:10:41.055 "rw_mbytes_per_sec": 0, 00:10:41.055 "r_mbytes_per_sec": 
0, 00:10:41.055 "w_mbytes_per_sec": 0 00:10:41.055 }, 00:10:41.055 "claimed": false, 00:10:41.055 "zoned": false, 00:10:41.055 "supported_io_types": { 00:10:41.055 "read": true, 00:10:41.055 "write": true, 00:10:41.055 "unmap": true, 00:10:41.055 "flush": true, 00:10:41.055 "reset": true, 00:10:41.055 "nvme_admin": false, 00:10:41.055 "nvme_io": false, 00:10:41.055 "nvme_io_md": false, 00:10:41.055 "write_zeroes": true, 00:10:41.055 "zcopy": false, 00:10:41.055 "get_zone_info": false, 00:10:41.055 "zone_management": false, 00:10:41.055 "zone_append": false, 00:10:41.055 "compare": false, 00:10:41.055 "compare_and_write": false, 00:10:41.055 "abort": false, 00:10:41.055 "seek_hole": false, 00:10:41.055 "seek_data": false, 00:10:41.055 "copy": false, 00:10:41.055 "nvme_iov_md": false 00:10:41.055 }, 00:10:41.055 "memory_domains": [ 00:10:41.055 { 00:10:41.055 "dma_device_id": "system", 00:10:41.055 "dma_device_type": 1 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.055 "dma_device_type": 2 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "dma_device_id": "system", 00:10:41.055 "dma_device_type": 1 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.055 "dma_device_type": 2 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "dma_device_id": "system", 00:10:41.055 "dma_device_type": 1 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.055 "dma_device_type": 2 00:10:41.055 } 00:10:41.055 ], 00:10:41.055 "driver_specific": { 00:10:41.055 "raid": { 00:10:41.055 "uuid": "d90134be-c5b4-48ac-80d6-3540890f685b", 00:10:41.055 "strip_size_kb": 64, 00:10:41.055 "state": "online", 00:10:41.055 "raid_level": "concat", 00:10:41.055 "superblock": false, 00:10:41.055 "num_base_bdevs": 3, 00:10:41.055 "num_base_bdevs_discovered": 3, 00:10:41.055 "num_base_bdevs_operational": 3, 00:10:41.055 "base_bdevs_list": [ 00:10:41.055 { 00:10:41.055 "name": "BaseBdev1", 
00:10:41.055 "uuid": "5b72740e-edf5-4122-a912-9fec8cd3c7b1", 00:10:41.055 "is_configured": true, 00:10:41.055 "data_offset": 0, 00:10:41.055 "data_size": 65536 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "name": "BaseBdev2", 00:10:41.055 "uuid": "acb5906f-3e14-48b7-99d1-9f7bc4d71768", 00:10:41.055 "is_configured": true, 00:10:41.055 "data_offset": 0, 00:10:41.055 "data_size": 65536 00:10:41.055 }, 00:10:41.055 { 00:10:41.055 "name": "BaseBdev3", 00:10:41.055 "uuid": "8e3e8eb9-b65d-433c-ade6-0ca83dea75f7", 00:10:41.055 "is_configured": true, 00:10:41.055 "data_offset": 0, 00:10:41.055 "data_size": 65536 00:10:41.055 } 00:10:41.055 ] 00:10:41.055 } 00:10:41.055 } 00:10:41.055 }' 00:10:41.055 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.055 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.056 BaseBdev2 00:10:41.056 BaseBdev3' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.056 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.315 [2024-11-05 11:26:40.353158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.315 [2024-11-05 11:26:40.353189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.315 [2024-11-05 11:26:40.353252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.315 "name": "Existed_Raid", 00:10:41.315 "uuid": "d90134be-c5b4-48ac-80d6-3540890f685b", 00:10:41.315 "strip_size_kb": 64, 00:10:41.315 "state": "offline", 00:10:41.315 "raid_level": "concat", 00:10:41.315 "superblock": false, 00:10:41.315 "num_base_bdevs": 3, 00:10:41.315 "num_base_bdevs_discovered": 2, 00:10:41.315 "num_base_bdevs_operational": 2, 00:10:41.315 "base_bdevs_list": [ 00:10:41.315 { 00:10:41.315 "name": null, 00:10:41.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.315 "is_configured": false, 00:10:41.315 "data_offset": 0, 00:10:41.315 "data_size": 65536 00:10:41.315 }, 00:10:41.315 { 00:10:41.315 "name": "BaseBdev2", 00:10:41.315 "uuid": 
"acb5906f-3e14-48b7-99d1-9f7bc4d71768", 00:10:41.315 "is_configured": true, 00:10:41.315 "data_offset": 0, 00:10:41.315 "data_size": 65536 00:10:41.315 }, 00:10:41.315 { 00:10:41.315 "name": "BaseBdev3", 00:10:41.315 "uuid": "8e3e8eb9-b65d-433c-ade6-0ca83dea75f7", 00:10:41.315 "is_configured": true, 00:10:41.315 "data_offset": 0, 00:10:41.315 "data_size": 65536 00:10:41.315 } 00:10:41.315 ] 00:10:41.315 }' 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.315 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.884 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.884 [2024-11-05 11:26:40.906525] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.884 [2024-11-05 11:26:41.060288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.884 [2024-11-05 11:26:41.060350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.884 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.884 11:26:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.143 BaseBdev2 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.143 
11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.143 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.143 [ 00:10:42.143 { 00:10:42.143 "name": "BaseBdev2", 00:10:42.143 "aliases": [ 00:10:42.143 "5f0e4838-14d0-41a5-9af3-56de9ffec697" 00:10:42.143 ], 00:10:42.143 "product_name": "Malloc disk", 00:10:42.143 "block_size": 512, 00:10:42.143 "num_blocks": 65536, 00:10:42.143 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:42.143 "assigned_rate_limits": { 00:10:42.143 "rw_ios_per_sec": 0, 00:10:42.143 "rw_mbytes_per_sec": 0, 00:10:42.143 "r_mbytes_per_sec": 0, 00:10:42.143 "w_mbytes_per_sec": 0 00:10:42.143 }, 00:10:42.143 "claimed": false, 00:10:42.143 "zoned": false, 00:10:42.143 "supported_io_types": { 00:10:42.143 "read": true, 00:10:42.143 "write": true, 00:10:42.143 "unmap": true, 00:10:42.143 "flush": true, 00:10:42.143 "reset": true, 00:10:42.143 "nvme_admin": false, 00:10:42.143 "nvme_io": false, 00:10:42.143 "nvme_io_md": false, 00:10:42.143 "write_zeroes": true, 
00:10:42.143 "zcopy": true, 00:10:42.143 "get_zone_info": false, 00:10:42.143 "zone_management": false, 00:10:42.143 "zone_append": false, 00:10:42.143 "compare": false, 00:10:42.143 "compare_and_write": false, 00:10:42.144 "abort": true, 00:10:42.144 "seek_hole": false, 00:10:42.144 "seek_data": false, 00:10:42.144 "copy": true, 00:10:42.144 "nvme_iov_md": false 00:10:42.144 }, 00:10:42.144 "memory_domains": [ 00:10:42.144 { 00:10:42.144 "dma_device_id": "system", 00:10:42.144 "dma_device_type": 1 00:10:42.144 }, 00:10:42.144 { 00:10:42.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.144 "dma_device_type": 2 00:10:42.144 } 00:10:42.144 ], 00:10:42.144 "driver_specific": {} 00:10:42.144 } 00:10:42.144 ] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.144 BaseBdev3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.144 11:26:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.144 [ 00:10:42.144 { 00:10:42.144 "name": "BaseBdev3", 00:10:42.144 "aliases": [ 00:10:42.144 "d7b7e846-64f2-44f8-aaae-44f76bca4dd4" 00:10:42.144 ], 00:10:42.144 "product_name": "Malloc disk", 00:10:42.144 "block_size": 512, 00:10:42.144 "num_blocks": 65536, 00:10:42.144 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:42.144 "assigned_rate_limits": { 00:10:42.144 "rw_ios_per_sec": 0, 00:10:42.144 "rw_mbytes_per_sec": 0, 00:10:42.144 "r_mbytes_per_sec": 0, 00:10:42.144 "w_mbytes_per_sec": 0 00:10:42.144 }, 00:10:42.144 "claimed": false, 00:10:42.144 "zoned": false, 00:10:42.144 "supported_io_types": { 00:10:42.144 "read": true, 00:10:42.144 "write": true, 00:10:42.144 "unmap": true, 00:10:42.144 "flush": true, 00:10:42.144 "reset": true, 00:10:42.144 "nvme_admin": false, 00:10:42.144 "nvme_io": false, 00:10:42.144 "nvme_io_md": false, 00:10:42.144 "write_zeroes": true, 
00:10:42.144 "zcopy": true, 00:10:42.144 "get_zone_info": false, 00:10:42.144 "zone_management": false, 00:10:42.144 "zone_append": false, 00:10:42.144 "compare": false, 00:10:42.144 "compare_and_write": false, 00:10:42.144 "abort": true, 00:10:42.144 "seek_hole": false, 00:10:42.144 "seek_data": false, 00:10:42.144 "copy": true, 00:10:42.144 "nvme_iov_md": false 00:10:42.144 }, 00:10:42.144 "memory_domains": [ 00:10:42.144 { 00:10:42.144 "dma_device_id": "system", 00:10:42.144 "dma_device_type": 1 00:10:42.144 }, 00:10:42.144 { 00:10:42.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.144 "dma_device_type": 2 00:10:42.144 } 00:10:42.144 ], 00:10:42.144 "driver_specific": {} 00:10:42.144 } 00:10:42.144 ] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.144 [2024-11-05 11:26:41.366283] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.144 [2024-11-05 11:26:41.366368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.144 [2024-11-05 11:26:41.366411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.144 [2024-11-05 11:26:41.368119] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.144 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.403 11:26:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.403 "name": "Existed_Raid", 00:10:42.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.403 "strip_size_kb": 64, 00:10:42.403 "state": "configuring", 00:10:42.403 "raid_level": "concat", 00:10:42.403 "superblock": false, 00:10:42.403 "num_base_bdevs": 3, 00:10:42.403 "num_base_bdevs_discovered": 2, 00:10:42.403 "num_base_bdevs_operational": 3, 00:10:42.403 "base_bdevs_list": [ 00:10:42.403 { 00:10:42.403 "name": "BaseBdev1", 00:10:42.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.403 "is_configured": false, 00:10:42.403 "data_offset": 0, 00:10:42.403 "data_size": 0 00:10:42.403 }, 00:10:42.403 { 00:10:42.403 "name": "BaseBdev2", 00:10:42.403 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:42.403 "is_configured": true, 00:10:42.403 "data_offset": 0, 00:10:42.403 "data_size": 65536 00:10:42.403 }, 00:10:42.403 { 00:10:42.403 "name": "BaseBdev3", 00:10:42.403 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:42.403 "is_configured": true, 00:10:42.403 "data_offset": 0, 00:10:42.403 "data_size": 65536 00:10:42.403 } 00:10:42.403 ] 00:10:42.403 }' 00:10:42.403 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.403 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.663 [2024-11-05 11:26:41.797549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.663 "name": "Existed_Raid", 00:10:42.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.663 "strip_size_kb": 64, 00:10:42.663 "state": "configuring", 00:10:42.663 "raid_level": "concat", 00:10:42.663 "superblock": false, 
00:10:42.663 "num_base_bdevs": 3, 00:10:42.663 "num_base_bdevs_discovered": 1, 00:10:42.663 "num_base_bdevs_operational": 3, 00:10:42.663 "base_bdevs_list": [ 00:10:42.663 { 00:10:42.663 "name": "BaseBdev1", 00:10:42.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.663 "is_configured": false, 00:10:42.663 "data_offset": 0, 00:10:42.663 "data_size": 0 00:10:42.663 }, 00:10:42.663 { 00:10:42.663 "name": null, 00:10:42.663 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:42.663 "is_configured": false, 00:10:42.663 "data_offset": 0, 00:10:42.663 "data_size": 65536 00:10:42.663 }, 00:10:42.663 { 00:10:42.663 "name": "BaseBdev3", 00:10:42.663 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:42.663 "is_configured": true, 00:10:42.663 "data_offset": 0, 00:10:42.663 "data_size": 65536 00:10:42.663 } 00:10:42.663 ] 00:10:42.663 }' 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.663 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.957 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.217 
11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.217 [2024-11-05 11:26:42.298680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.217 BaseBdev1 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.217 [ 00:10:43.217 { 00:10:43.217 "name": "BaseBdev1", 00:10:43.217 "aliases": [ 00:10:43.217 "0b468515-cc2c-444c-8e72-813b134b6683" 00:10:43.217 ], 00:10:43.217 "product_name": 
"Malloc disk", 00:10:43.217 "block_size": 512, 00:10:43.217 "num_blocks": 65536, 00:10:43.217 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:43.217 "assigned_rate_limits": { 00:10:43.217 "rw_ios_per_sec": 0, 00:10:43.217 "rw_mbytes_per_sec": 0, 00:10:43.217 "r_mbytes_per_sec": 0, 00:10:43.217 "w_mbytes_per_sec": 0 00:10:43.217 }, 00:10:43.217 "claimed": true, 00:10:43.217 "claim_type": "exclusive_write", 00:10:43.217 "zoned": false, 00:10:43.217 "supported_io_types": { 00:10:43.217 "read": true, 00:10:43.217 "write": true, 00:10:43.217 "unmap": true, 00:10:43.217 "flush": true, 00:10:43.217 "reset": true, 00:10:43.217 "nvme_admin": false, 00:10:43.217 "nvme_io": false, 00:10:43.217 "nvme_io_md": false, 00:10:43.217 "write_zeroes": true, 00:10:43.217 "zcopy": true, 00:10:43.217 "get_zone_info": false, 00:10:43.217 "zone_management": false, 00:10:43.217 "zone_append": false, 00:10:43.217 "compare": false, 00:10:43.217 "compare_and_write": false, 00:10:43.217 "abort": true, 00:10:43.217 "seek_hole": false, 00:10:43.217 "seek_data": false, 00:10:43.217 "copy": true, 00:10:43.217 "nvme_iov_md": false 00:10:43.217 }, 00:10:43.217 "memory_domains": [ 00:10:43.217 { 00:10:43.217 "dma_device_id": "system", 00:10:43.217 "dma_device_type": 1 00:10:43.217 }, 00:10:43.217 { 00:10:43.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.217 "dma_device_type": 2 00:10:43.217 } 00:10:43.217 ], 00:10:43.217 "driver_specific": {} 00:10:43.217 } 00:10:43.217 ] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.217 11:26:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.217 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.218 "name": "Existed_Raid", 00:10:43.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.218 "strip_size_kb": 64, 00:10:43.218 "state": "configuring", 00:10:43.218 "raid_level": "concat", 00:10:43.218 "superblock": false, 00:10:43.218 "num_base_bdevs": 3, 00:10:43.218 "num_base_bdevs_discovered": 2, 00:10:43.218 "num_base_bdevs_operational": 3, 00:10:43.218 "base_bdevs_list": [ 00:10:43.218 { 00:10:43.218 "name": "BaseBdev1", 
00:10:43.218 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:43.218 "is_configured": true, 00:10:43.218 "data_offset": 0, 00:10:43.218 "data_size": 65536 00:10:43.218 }, 00:10:43.218 { 00:10:43.218 "name": null, 00:10:43.218 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:43.218 "is_configured": false, 00:10:43.218 "data_offset": 0, 00:10:43.218 "data_size": 65536 00:10:43.218 }, 00:10:43.218 { 00:10:43.218 "name": "BaseBdev3", 00:10:43.218 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:43.218 "is_configured": true, 00:10:43.218 "data_offset": 0, 00:10:43.218 "data_size": 65536 00:10:43.218 } 00:10:43.218 ] 00:10:43.218 }' 00:10:43.218 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.218 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.785 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.786 [2024-11-05 11:26:42.829889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.786 
11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.786 "name": "Existed_Raid", 00:10:43.786 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:43.786 "strip_size_kb": 64, 00:10:43.786 "state": "configuring", 00:10:43.786 "raid_level": "concat", 00:10:43.786 "superblock": false, 00:10:43.786 "num_base_bdevs": 3, 00:10:43.786 "num_base_bdevs_discovered": 1, 00:10:43.786 "num_base_bdevs_operational": 3, 00:10:43.786 "base_bdevs_list": [ 00:10:43.786 { 00:10:43.786 "name": "BaseBdev1", 00:10:43.786 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:43.786 "is_configured": true, 00:10:43.786 "data_offset": 0, 00:10:43.786 "data_size": 65536 00:10:43.786 }, 00:10:43.786 { 00:10:43.786 "name": null, 00:10:43.786 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:43.786 "is_configured": false, 00:10:43.786 "data_offset": 0, 00:10:43.786 "data_size": 65536 00:10:43.786 }, 00:10:43.786 { 00:10:43.786 "name": null, 00:10:43.786 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:43.786 "is_configured": false, 00:10:43.786 "data_offset": 0, 00:10:43.786 "data_size": 65536 00:10:43.786 } 00:10:43.786 ] 00:10:43.786 }' 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.786 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.044 [2024-11-05 11:26:43.305145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.044 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.303 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.303 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.303 "name": "Existed_Raid", 00:10:44.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.303 "strip_size_kb": 64, 00:10:44.303 "state": "configuring", 00:10:44.303 "raid_level": "concat", 00:10:44.303 "superblock": false, 00:10:44.303 "num_base_bdevs": 3, 00:10:44.303 "num_base_bdevs_discovered": 2, 00:10:44.303 "num_base_bdevs_operational": 3, 00:10:44.303 "base_bdevs_list": [ 00:10:44.303 { 00:10:44.303 "name": "BaseBdev1", 00:10:44.303 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:44.303 "is_configured": true, 00:10:44.303 "data_offset": 0, 00:10:44.303 "data_size": 65536 00:10:44.303 }, 00:10:44.303 { 00:10:44.303 "name": null, 00:10:44.303 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:44.303 "is_configured": false, 00:10:44.303 "data_offset": 0, 00:10:44.303 "data_size": 65536 00:10:44.303 }, 00:10:44.303 { 00:10:44.303 "name": "BaseBdev3", 00:10:44.303 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:44.303 "is_configured": true, 00:10:44.303 "data_offset": 0, 00:10:44.303 "data_size": 65536 00:10:44.303 } 00:10:44.303 ] 00:10:44.303 }' 00:10:44.303 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.303 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.563 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.563 [2024-11-05 11:26:43.776302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.823 
11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.823 "name": "Existed_Raid", 00:10:44.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.823 "strip_size_kb": 64, 00:10:44.823 "state": "configuring", 00:10:44.823 "raid_level": "concat", 00:10:44.823 "superblock": false, 00:10:44.823 "num_base_bdevs": 3, 00:10:44.823 "num_base_bdevs_discovered": 1, 00:10:44.823 "num_base_bdevs_operational": 3, 00:10:44.823 "base_bdevs_list": [ 00:10:44.823 { 00:10:44.823 "name": null, 00:10:44.823 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:44.823 "is_configured": false, 00:10:44.823 "data_offset": 0, 00:10:44.823 "data_size": 65536 00:10:44.823 }, 00:10:44.823 { 00:10:44.823 "name": null, 00:10:44.823 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:44.823 "is_configured": false, 00:10:44.823 "data_offset": 0, 00:10:44.823 "data_size": 65536 00:10:44.823 }, 00:10:44.823 { 00:10:44.823 "name": "BaseBdev3", 00:10:44.823 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:44.823 "is_configured": true, 00:10:44.823 "data_offset": 0, 00:10:44.823 "data_size": 65536 00:10:44.823 } 00:10:44.823 ] 00:10:44.823 }' 00:10:44.823 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.823 11:26:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.082 [2024-11-05 11:26:44.316153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.082 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.083 11:26:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.083 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.341 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.341 "name": "Existed_Raid", 00:10:45.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.341 "strip_size_kb": 64, 00:10:45.341 "state": "configuring", 00:10:45.341 "raid_level": "concat", 00:10:45.341 "superblock": false, 00:10:45.341 "num_base_bdevs": 3, 00:10:45.341 "num_base_bdevs_discovered": 2, 00:10:45.341 "num_base_bdevs_operational": 3, 00:10:45.341 "base_bdevs_list": [ 00:10:45.341 { 00:10:45.341 "name": null, 00:10:45.341 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:45.341 "is_configured": false, 00:10:45.341 "data_offset": 0, 00:10:45.341 "data_size": 65536 00:10:45.341 }, 00:10:45.341 { 00:10:45.341 "name": "BaseBdev2", 00:10:45.341 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:45.341 "is_configured": true, 00:10:45.341 "data_offset": 
0, 00:10:45.341 "data_size": 65536 00:10:45.341 }, 00:10:45.341 { 00:10:45.341 "name": "BaseBdev3", 00:10:45.341 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:45.341 "is_configured": true, 00:10:45.341 "data_offset": 0, 00:10:45.341 "data_size": 65536 00:10:45.341 } 00:10:45.341 ] 00:10:45.341 }' 00:10:45.341 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.341 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.600 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.600 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.600 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.600 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.600 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0b468515-cc2c-444c-8e72-813b134b6683 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.601 [2024-11-05 11:26:44.844684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.601 [2024-11-05 11:26:44.844744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.601 [2024-11-05 11:26:44.844755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:45.601 [2024-11-05 11:26:44.845020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:45.601 [2024-11-05 11:26:44.845223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.601 [2024-11-05 11:26:44.845234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.601 [2024-11-05 11:26:44.845518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.601 NewBaseBdev 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:45.601 
11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.601 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.861 [ 00:10:45.861 { 00:10:45.861 "name": "NewBaseBdev", 00:10:45.861 "aliases": [ 00:10:45.861 "0b468515-cc2c-444c-8e72-813b134b6683" 00:10:45.861 ], 00:10:45.861 "product_name": "Malloc disk", 00:10:45.861 "block_size": 512, 00:10:45.861 "num_blocks": 65536, 00:10:45.861 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:45.861 "assigned_rate_limits": { 00:10:45.861 "rw_ios_per_sec": 0, 00:10:45.861 "rw_mbytes_per_sec": 0, 00:10:45.861 "r_mbytes_per_sec": 0, 00:10:45.861 "w_mbytes_per_sec": 0 00:10:45.861 }, 00:10:45.861 "claimed": true, 00:10:45.861 "claim_type": "exclusive_write", 00:10:45.861 "zoned": false, 00:10:45.861 "supported_io_types": { 00:10:45.861 "read": true, 00:10:45.861 "write": true, 00:10:45.861 "unmap": true, 00:10:45.861 "flush": true, 00:10:45.861 "reset": true, 00:10:45.861 "nvme_admin": false, 00:10:45.861 "nvme_io": false, 00:10:45.861 "nvme_io_md": false, 00:10:45.861 "write_zeroes": true, 00:10:45.861 "zcopy": true, 00:10:45.861 "get_zone_info": false, 00:10:45.861 "zone_management": false, 00:10:45.861 "zone_append": false, 00:10:45.861 "compare": false, 00:10:45.861 "compare_and_write": false, 00:10:45.861 "abort": true, 00:10:45.861 "seek_hole": false, 00:10:45.861 "seek_data": false, 00:10:45.861 "copy": true, 00:10:45.861 "nvme_iov_md": false 00:10:45.861 }, 00:10:45.861 
"memory_domains": [ 00:10:45.861 { 00:10:45.861 "dma_device_id": "system", 00:10:45.861 "dma_device_type": 1 00:10:45.861 }, 00:10:45.861 { 00:10:45.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.861 "dma_device_type": 2 00:10:45.861 } 00:10:45.861 ], 00:10:45.861 "driver_specific": {} 00:10:45.861 } 00:10:45.861 ] 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.861 "name": "Existed_Raid", 00:10:45.861 "uuid": "5a3e2ab3-948a-4f84-8357-27ebf8bd56eb", 00:10:45.861 "strip_size_kb": 64, 00:10:45.861 "state": "online", 00:10:45.861 "raid_level": "concat", 00:10:45.861 "superblock": false, 00:10:45.861 "num_base_bdevs": 3, 00:10:45.861 "num_base_bdevs_discovered": 3, 00:10:45.861 "num_base_bdevs_operational": 3, 00:10:45.861 "base_bdevs_list": [ 00:10:45.861 { 00:10:45.861 "name": "NewBaseBdev", 00:10:45.861 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:45.861 "is_configured": true, 00:10:45.861 "data_offset": 0, 00:10:45.861 "data_size": 65536 00:10:45.861 }, 00:10:45.861 { 00:10:45.861 "name": "BaseBdev2", 00:10:45.861 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:45.861 "is_configured": true, 00:10:45.861 "data_offset": 0, 00:10:45.861 "data_size": 65536 00:10:45.861 }, 00:10:45.861 { 00:10:45.861 "name": "BaseBdev3", 00:10:45.861 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:45.861 "is_configured": true, 00:10:45.861 "data_offset": 0, 00:10:45.861 "data_size": 65536 00:10:45.861 } 00:10:45.861 ] 00:10:45.861 }' 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.861 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.121 [2024-11-05 11:26:45.328232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.121 "name": "Existed_Raid", 00:10:46.121 "aliases": [ 00:10:46.121 "5a3e2ab3-948a-4f84-8357-27ebf8bd56eb" 00:10:46.121 ], 00:10:46.121 "product_name": "Raid Volume", 00:10:46.121 "block_size": 512, 00:10:46.121 "num_blocks": 196608, 00:10:46.121 "uuid": "5a3e2ab3-948a-4f84-8357-27ebf8bd56eb", 00:10:46.121 "assigned_rate_limits": { 00:10:46.121 "rw_ios_per_sec": 0, 00:10:46.121 "rw_mbytes_per_sec": 0, 00:10:46.121 "r_mbytes_per_sec": 0, 00:10:46.121 "w_mbytes_per_sec": 0 00:10:46.121 }, 00:10:46.121 "claimed": false, 00:10:46.121 "zoned": false, 00:10:46.121 "supported_io_types": { 00:10:46.121 "read": true, 00:10:46.121 "write": true, 00:10:46.121 "unmap": true, 00:10:46.121 "flush": true, 00:10:46.121 "reset": true, 00:10:46.121 "nvme_admin": false, 00:10:46.121 "nvme_io": false, 00:10:46.121 "nvme_io_md": false, 00:10:46.121 "write_zeroes": true, 
00:10:46.121 "zcopy": false, 00:10:46.121 "get_zone_info": false, 00:10:46.121 "zone_management": false, 00:10:46.121 "zone_append": false, 00:10:46.121 "compare": false, 00:10:46.121 "compare_and_write": false, 00:10:46.121 "abort": false, 00:10:46.121 "seek_hole": false, 00:10:46.121 "seek_data": false, 00:10:46.121 "copy": false, 00:10:46.121 "nvme_iov_md": false 00:10:46.121 }, 00:10:46.121 "memory_domains": [ 00:10:46.121 { 00:10:46.121 "dma_device_id": "system", 00:10:46.121 "dma_device_type": 1 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.121 "dma_device_type": 2 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "dma_device_id": "system", 00:10:46.121 "dma_device_type": 1 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.121 "dma_device_type": 2 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "dma_device_id": "system", 00:10:46.121 "dma_device_type": 1 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.121 "dma_device_type": 2 00:10:46.121 } 00:10:46.121 ], 00:10:46.121 "driver_specific": { 00:10:46.121 "raid": { 00:10:46.121 "uuid": "5a3e2ab3-948a-4f84-8357-27ebf8bd56eb", 00:10:46.121 "strip_size_kb": 64, 00:10:46.121 "state": "online", 00:10:46.121 "raid_level": "concat", 00:10:46.121 "superblock": false, 00:10:46.121 "num_base_bdevs": 3, 00:10:46.121 "num_base_bdevs_discovered": 3, 00:10:46.121 "num_base_bdevs_operational": 3, 00:10:46.121 "base_bdevs_list": [ 00:10:46.121 { 00:10:46.121 "name": "NewBaseBdev", 00:10:46.121 "uuid": "0b468515-cc2c-444c-8e72-813b134b6683", 00:10:46.121 "is_configured": true, 00:10:46.121 "data_offset": 0, 00:10:46.121 "data_size": 65536 00:10:46.121 }, 00:10:46.121 { 00:10:46.121 "name": "BaseBdev2", 00:10:46.121 "uuid": "5f0e4838-14d0-41a5-9af3-56de9ffec697", 00:10:46.121 "is_configured": true, 00:10:46.121 "data_offset": 0, 00:10:46.121 "data_size": 65536 00:10:46.121 }, 00:10:46.121 { 
00:10:46.121 "name": "BaseBdev3", 00:10:46.121 "uuid": "d7b7e846-64f2-44f8-aaae-44f76bca4dd4", 00:10:46.121 "is_configured": true, 00:10:46.121 "data_offset": 0, 00:10:46.121 "data_size": 65536 00:10:46.121 } 00:10:46.121 ] 00:10:46.121 } 00:10:46.121 } 00:10:46.121 }' 00:10:46.121 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.380 BaseBdev2 00:10:46.380 BaseBdev3' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:46.380 [2024-11-05 11:26:45.627407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.380 [2024-11-05 11:26:45.627444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.380 [2024-11-05 11:26:45.627541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.380 [2024-11-05 11:26:45.627597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.380 [2024-11-05 11:26:45.627609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65739 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65739 ']' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65739 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.380 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65739 00:10:46.640 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:46.640 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:46.640 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65739' 00:10:46.640 killing process with pid 65739 00:10:46.640 11:26:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65739 00:10:46.640 [2024-11-05 11:26:45.670900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.640 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65739 00:10:46.898 [2024-11-05 11:26:45.974785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.837 11:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.837 00:10:47.837 real 0m10.420s 00:10:47.837 user 0m16.475s 00:10:47.837 sys 0m1.905s 00:10:47.837 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.837 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.837 ************************************ 00:10:47.837 END TEST raid_state_function_test 00:10:47.837 ************************************ 00:10:48.102 11:26:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:48.102 11:26:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:48.102 11:26:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.102 11:26:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.102 ************************************ 00:10:48.102 START TEST raid_state_function_test_sb 00:10:48.102 ************************************ 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.102 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66360 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66360' 00:10:48.103 Process raid pid: 66360 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66360 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66360 ']' 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.103 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.103 [2024-11-05 11:26:47.263042] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:10:48.103 [2024-11-05 11:26:47.263265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.361 [2024-11-05 11:26:47.418289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.361 [2024-11-05 11:26:47.542279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.619 [2024-11-05 11:26:47.743050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.619 [2024-11-05 11:26:47.743097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.187 [2024-11-05 11:26:48.175264] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.187 [2024-11-05 11:26:48.175342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.187 [2024-11-05 
11:26:48.175355] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.187 [2024-11-05 11:26:48.175367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.187 [2024-11-05 11:26:48.175375] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.187 [2024-11-05 11:26:48.175385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.187 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.188 "name": "Existed_Raid", 00:10:49.188 "uuid": "08e48dc9-8ab0-496a-96fc-4abcbd69ff91", 00:10:49.188 "strip_size_kb": 64, 00:10:49.188 "state": "configuring", 00:10:49.188 "raid_level": "concat", 00:10:49.188 "superblock": true, 00:10:49.188 "num_base_bdevs": 3, 00:10:49.188 "num_base_bdevs_discovered": 0, 00:10:49.188 "num_base_bdevs_operational": 3, 00:10:49.188 "base_bdevs_list": [ 00:10:49.188 { 00:10:49.188 "name": "BaseBdev1", 00:10:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.188 "is_configured": false, 00:10:49.188 "data_offset": 0, 00:10:49.188 "data_size": 0 00:10:49.188 }, 00:10:49.188 { 00:10:49.188 "name": "BaseBdev2", 00:10:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.188 "is_configured": false, 00:10:49.188 "data_offset": 0, 00:10:49.188 "data_size": 0 00:10:49.188 }, 00:10:49.188 { 00:10:49.188 "name": "BaseBdev3", 00:10:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.188 "is_configured": false, 00:10:49.188 "data_offset": 0, 00:10:49.188 "data_size": 0 00:10:49.188 } 00:10:49.188 ] 00:10:49.188 }' 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.188 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 [2024-11-05 11:26:48.678312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.447 [2024-11-05 11:26:48.678404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.447 [2024-11-05 11:26:48.690302] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.447 [2024-11-05 11:26:48.690386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.447 [2024-11-05 11:26:48.690413] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.447 [2024-11-05 11:26:48.690436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.447 [2024-11-05 11:26:48.690453] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.447 [2024-11-05 11:26:48.690473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.447 
11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.447 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.707 [2024-11-05 11:26:48.738325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.707 BaseBdev1 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.707 [ 00:10:49.707 { 
00:10:49.707 "name": "BaseBdev1", 00:10:49.707 "aliases": [ 00:10:49.707 "c60adbe9-1108-43f0-9caf-cedbecad49f6" 00:10:49.707 ], 00:10:49.707 "product_name": "Malloc disk", 00:10:49.707 "block_size": 512, 00:10:49.707 "num_blocks": 65536, 00:10:49.707 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:49.707 "assigned_rate_limits": { 00:10:49.707 "rw_ios_per_sec": 0, 00:10:49.707 "rw_mbytes_per_sec": 0, 00:10:49.707 "r_mbytes_per_sec": 0, 00:10:49.707 "w_mbytes_per_sec": 0 00:10:49.707 }, 00:10:49.707 "claimed": true, 00:10:49.707 "claim_type": "exclusive_write", 00:10:49.707 "zoned": false, 00:10:49.707 "supported_io_types": { 00:10:49.707 "read": true, 00:10:49.707 "write": true, 00:10:49.707 "unmap": true, 00:10:49.707 "flush": true, 00:10:49.707 "reset": true, 00:10:49.707 "nvme_admin": false, 00:10:49.707 "nvme_io": false, 00:10:49.707 "nvme_io_md": false, 00:10:49.707 "write_zeroes": true, 00:10:49.707 "zcopy": true, 00:10:49.707 "get_zone_info": false, 00:10:49.707 "zone_management": false, 00:10:49.707 "zone_append": false, 00:10:49.707 "compare": false, 00:10:49.707 "compare_and_write": false, 00:10:49.707 "abort": true, 00:10:49.707 "seek_hole": false, 00:10:49.707 "seek_data": false, 00:10:49.707 "copy": true, 00:10:49.707 "nvme_iov_md": false 00:10:49.707 }, 00:10:49.707 "memory_domains": [ 00:10:49.707 { 00:10:49.707 "dma_device_id": "system", 00:10:49.707 "dma_device_type": 1 00:10:49.707 }, 00:10:49.707 { 00:10:49.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.707 "dma_device_type": 2 00:10:49.707 } 00:10:49.707 ], 00:10:49.707 "driver_specific": {} 00:10:49.707 } 00:10:49.707 ] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.707 "name": "Existed_Raid", 00:10:49.707 "uuid": "147a918e-9e98-469d-80c7-63141869db1e", 00:10:49.707 "strip_size_kb": 64, 00:10:49.707 "state": "configuring", 00:10:49.707 "raid_level": "concat", 00:10:49.707 "superblock": true, 00:10:49.707 
"num_base_bdevs": 3, 00:10:49.707 "num_base_bdevs_discovered": 1, 00:10:49.707 "num_base_bdevs_operational": 3, 00:10:49.707 "base_bdevs_list": [ 00:10:49.707 { 00:10:49.707 "name": "BaseBdev1", 00:10:49.707 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:49.707 "is_configured": true, 00:10:49.707 "data_offset": 2048, 00:10:49.707 "data_size": 63488 00:10:49.707 }, 00:10:49.707 { 00:10:49.707 "name": "BaseBdev2", 00:10:49.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.707 "is_configured": false, 00:10:49.707 "data_offset": 0, 00:10:49.707 "data_size": 0 00:10:49.707 }, 00:10:49.707 { 00:10:49.707 "name": "BaseBdev3", 00:10:49.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.707 "is_configured": false, 00:10:49.707 "data_offset": 0, 00:10:49.707 "data_size": 0 00:10:49.707 } 00:10:49.707 ] 00:10:49.707 }' 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.707 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.966 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.966 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.966 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.966 [2024-11-05 11:26:49.181666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.966 [2024-11-05 11:26:49.181789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.966 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.966 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.967 
11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 [2024-11-05 11:26:49.189688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.967 [2024-11-05 11:26:49.191482] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.967 [2024-11-05 11:26:49.191573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.967 [2024-11-05 11:26:49.191588] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.967 [2024-11-05 11:26:49.191598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.226 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.226 "name": "Existed_Raid", 00:10:50.226 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:50.226 "strip_size_kb": 64, 00:10:50.226 "state": "configuring", 00:10:50.226 "raid_level": "concat", 00:10:50.226 "superblock": true, 00:10:50.226 "num_base_bdevs": 3, 00:10:50.226 "num_base_bdevs_discovered": 1, 00:10:50.226 "num_base_bdevs_operational": 3, 00:10:50.226 "base_bdevs_list": [ 00:10:50.226 { 00:10:50.226 "name": "BaseBdev1", 00:10:50.226 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:50.226 "is_configured": true, 00:10:50.226 "data_offset": 2048, 00:10:50.226 "data_size": 63488 00:10:50.226 }, 00:10:50.226 { 00:10:50.226 "name": "BaseBdev2", 00:10:50.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.226 "is_configured": false, 00:10:50.226 "data_offset": 0, 00:10:50.226 "data_size": 0 00:10:50.226 }, 00:10:50.226 { 00:10:50.226 "name": "BaseBdev3", 00:10:50.226 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:50.226 "is_configured": false, 00:10:50.226 "data_offset": 0, 00:10:50.226 "data_size": 0 00:10:50.226 } 00:10:50.226 ] 00:10:50.226 }' 00:10:50.226 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.226 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.485 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.485 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.485 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.486 [2024-11-05 11:26:49.681955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.486 BaseBdev2 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.486 [ 00:10:50.486 { 00:10:50.486 "name": "BaseBdev2", 00:10:50.486 "aliases": [ 00:10:50.486 "b7d140a5-742b-4f1d-9ab0-e586937f91b7" 00:10:50.486 ], 00:10:50.486 "product_name": "Malloc disk", 00:10:50.486 "block_size": 512, 00:10:50.486 "num_blocks": 65536, 00:10:50.486 "uuid": "b7d140a5-742b-4f1d-9ab0-e586937f91b7", 00:10:50.486 "assigned_rate_limits": { 00:10:50.486 "rw_ios_per_sec": 0, 00:10:50.486 "rw_mbytes_per_sec": 0, 00:10:50.486 "r_mbytes_per_sec": 0, 00:10:50.486 "w_mbytes_per_sec": 0 00:10:50.486 }, 00:10:50.486 "claimed": true, 00:10:50.486 "claim_type": "exclusive_write", 00:10:50.486 "zoned": false, 00:10:50.486 "supported_io_types": { 00:10:50.486 "read": true, 00:10:50.486 "write": true, 00:10:50.486 "unmap": true, 00:10:50.486 "flush": true, 00:10:50.486 "reset": true, 00:10:50.486 "nvme_admin": false, 00:10:50.486 "nvme_io": false, 00:10:50.486 "nvme_io_md": false, 00:10:50.486 "write_zeroes": true, 00:10:50.486 "zcopy": true, 00:10:50.486 "get_zone_info": false, 00:10:50.486 "zone_management": false, 00:10:50.486 "zone_append": false, 00:10:50.486 "compare": false, 00:10:50.486 "compare_and_write": false, 00:10:50.486 "abort": true, 00:10:50.486 "seek_hole": false, 00:10:50.486 "seek_data": false, 00:10:50.486 "copy": true, 00:10:50.486 "nvme_iov_md": false 00:10:50.486 }, 00:10:50.486 "memory_domains": [ 00:10:50.486 { 00:10:50.486 "dma_device_id": "system", 00:10:50.486 "dma_device_type": 1 00:10:50.486 }, 00:10:50.486 { 00:10:50.486 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.486 "dma_device_type": 2 00:10:50.486 } 00:10:50.486 ], 00:10:50.486 "driver_specific": {} 00:10:50.486 } 00:10:50.486 ] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.486 "name": "Existed_Raid", 00:10:50.486 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:50.486 "strip_size_kb": 64, 00:10:50.486 "state": "configuring", 00:10:50.486 "raid_level": "concat", 00:10:50.486 "superblock": true, 00:10:50.486 "num_base_bdevs": 3, 00:10:50.486 "num_base_bdevs_discovered": 2, 00:10:50.486 "num_base_bdevs_operational": 3, 00:10:50.486 "base_bdevs_list": [ 00:10:50.486 { 00:10:50.486 "name": "BaseBdev1", 00:10:50.486 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:50.486 "is_configured": true, 00:10:50.486 "data_offset": 2048, 00:10:50.486 "data_size": 63488 00:10:50.486 }, 00:10:50.486 { 00:10:50.486 "name": "BaseBdev2", 00:10:50.486 "uuid": "b7d140a5-742b-4f1d-9ab0-e586937f91b7", 00:10:50.486 "is_configured": true, 00:10:50.486 "data_offset": 2048, 00:10:50.486 "data_size": 63488 00:10:50.486 }, 00:10:50.486 { 00:10:50.486 "name": "BaseBdev3", 00:10:50.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.486 "is_configured": false, 00:10:50.486 "data_offset": 0, 00:10:50.486 "data_size": 0 00:10:50.486 } 00:10:50.486 ] 00:10:50.486 }' 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.486 11:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.055 11:26:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 [2024-11-05 11:26:50.195705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.055 [2024-11-05 11:26:50.195975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.055 [2024-11-05 11:26:50.196001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:51.055 [2024-11-05 11:26:50.196336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:51.055 [2024-11-05 11:26:50.196517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.055 [2024-11-05 11:26:50.196530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:51.055 BaseBdev3 00:10:51.055 [2024-11-05 11:26:50.196684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 [ 00:10:51.055 { 00:10:51.055 "name": "BaseBdev3", 00:10:51.055 "aliases": [ 00:10:51.055 "35ef1dc8-5d45-4cca-87f2-417510404454" 00:10:51.055 ], 00:10:51.055 "product_name": "Malloc disk", 00:10:51.055 "block_size": 512, 00:10:51.055 "num_blocks": 65536, 00:10:51.055 "uuid": "35ef1dc8-5d45-4cca-87f2-417510404454", 00:10:51.055 "assigned_rate_limits": { 00:10:51.055 "rw_ios_per_sec": 0, 00:10:51.055 "rw_mbytes_per_sec": 0, 00:10:51.055 "r_mbytes_per_sec": 0, 00:10:51.055 "w_mbytes_per_sec": 0 00:10:51.055 }, 00:10:51.055 "claimed": true, 00:10:51.055 "claim_type": "exclusive_write", 00:10:51.055 "zoned": false, 00:10:51.055 "supported_io_types": { 00:10:51.055 "read": true, 00:10:51.055 "write": true, 00:10:51.055 "unmap": true, 00:10:51.055 "flush": true, 00:10:51.055 "reset": true, 00:10:51.055 "nvme_admin": false, 00:10:51.055 "nvme_io": false, 00:10:51.055 "nvme_io_md": false, 00:10:51.055 "write_zeroes": true, 00:10:51.055 "zcopy": true, 00:10:51.055 "get_zone_info": false, 00:10:51.055 "zone_management": false, 00:10:51.055 "zone_append": false, 00:10:51.055 "compare": false, 00:10:51.055 "compare_and_write": false, 00:10:51.055 "abort": true, 00:10:51.055 "seek_hole": false, 00:10:51.055 "seek_data": false, 
00:10:51.055 "copy": true, 00:10:51.055 "nvme_iov_md": false 00:10:51.055 }, 00:10:51.055 "memory_domains": [ 00:10:51.055 { 00:10:51.055 "dma_device_id": "system", 00:10:51.055 "dma_device_type": 1 00:10:51.055 }, 00:10:51.055 { 00:10:51.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.055 "dma_device_type": 2 00:10:51.055 } 00:10:51.055 ], 00:10:51.055 "driver_specific": {} 00:10:51.055 } 00:10:51.055 ] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.055 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.055 "name": "Existed_Raid", 00:10:51.055 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:51.055 "strip_size_kb": 64, 00:10:51.055 "state": "online", 00:10:51.055 "raid_level": "concat", 00:10:51.055 "superblock": true, 00:10:51.055 "num_base_bdevs": 3, 00:10:51.055 "num_base_bdevs_discovered": 3, 00:10:51.055 "num_base_bdevs_operational": 3, 00:10:51.055 "base_bdevs_list": [ 00:10:51.055 { 00:10:51.055 "name": "BaseBdev1", 00:10:51.055 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:51.055 "is_configured": true, 00:10:51.055 "data_offset": 2048, 00:10:51.055 "data_size": 63488 00:10:51.055 }, 00:10:51.055 { 00:10:51.055 "name": "BaseBdev2", 00:10:51.055 "uuid": "b7d140a5-742b-4f1d-9ab0-e586937f91b7", 00:10:51.055 "is_configured": true, 00:10:51.055 "data_offset": 2048, 00:10:51.055 "data_size": 63488 00:10:51.055 }, 00:10:51.056 { 00:10:51.056 "name": "BaseBdev3", 00:10:51.056 "uuid": "35ef1dc8-5d45-4cca-87f2-417510404454", 00:10:51.056 "is_configured": true, 00:10:51.056 "data_offset": 2048, 00:10:51.056 "data_size": 63488 00:10:51.056 } 00:10:51.056 ] 00:10:51.056 }' 00:10:51.056 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.056 11:26:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.624 [2024-11-05 11:26:50.727225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.624 "name": "Existed_Raid", 00:10:51.624 "aliases": [ 00:10:51.624 "9d77cffc-49c0-4493-bee1-7c8326ee4171" 00:10:51.624 ], 00:10:51.624 "product_name": "Raid Volume", 00:10:51.624 "block_size": 512, 00:10:51.624 "num_blocks": 190464, 00:10:51.624 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:51.624 "assigned_rate_limits": { 00:10:51.624 "rw_ios_per_sec": 0, 00:10:51.624 "rw_mbytes_per_sec": 0, 00:10:51.624 
"r_mbytes_per_sec": 0, 00:10:51.624 "w_mbytes_per_sec": 0 00:10:51.624 }, 00:10:51.624 "claimed": false, 00:10:51.624 "zoned": false, 00:10:51.624 "supported_io_types": { 00:10:51.624 "read": true, 00:10:51.624 "write": true, 00:10:51.624 "unmap": true, 00:10:51.624 "flush": true, 00:10:51.624 "reset": true, 00:10:51.624 "nvme_admin": false, 00:10:51.624 "nvme_io": false, 00:10:51.624 "nvme_io_md": false, 00:10:51.624 "write_zeroes": true, 00:10:51.624 "zcopy": false, 00:10:51.624 "get_zone_info": false, 00:10:51.624 "zone_management": false, 00:10:51.624 "zone_append": false, 00:10:51.624 "compare": false, 00:10:51.624 "compare_and_write": false, 00:10:51.624 "abort": false, 00:10:51.624 "seek_hole": false, 00:10:51.624 "seek_data": false, 00:10:51.624 "copy": false, 00:10:51.624 "nvme_iov_md": false 00:10:51.624 }, 00:10:51.624 "memory_domains": [ 00:10:51.624 { 00:10:51.624 "dma_device_id": "system", 00:10:51.624 "dma_device_type": 1 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.624 "dma_device_type": 2 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "dma_device_id": "system", 00:10:51.624 "dma_device_type": 1 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.624 "dma_device_type": 2 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "dma_device_id": "system", 00:10:51.624 "dma_device_type": 1 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.624 "dma_device_type": 2 00:10:51.624 } 00:10:51.624 ], 00:10:51.624 "driver_specific": { 00:10:51.624 "raid": { 00:10:51.624 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:51.624 "strip_size_kb": 64, 00:10:51.624 "state": "online", 00:10:51.624 "raid_level": "concat", 00:10:51.624 "superblock": true, 00:10:51.624 "num_base_bdevs": 3, 00:10:51.624 "num_base_bdevs_discovered": 3, 00:10:51.624 "num_base_bdevs_operational": 3, 00:10:51.624 "base_bdevs_list": [ 00:10:51.624 { 00:10:51.624 
"name": "BaseBdev1", 00:10:51.624 "uuid": "c60adbe9-1108-43f0-9caf-cedbecad49f6", 00:10:51.624 "is_configured": true, 00:10:51.624 "data_offset": 2048, 00:10:51.624 "data_size": 63488 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "name": "BaseBdev2", 00:10:51.624 "uuid": "b7d140a5-742b-4f1d-9ab0-e586937f91b7", 00:10:51.624 "is_configured": true, 00:10:51.624 "data_offset": 2048, 00:10:51.624 "data_size": 63488 00:10:51.624 }, 00:10:51.624 { 00:10:51.624 "name": "BaseBdev3", 00:10:51.624 "uuid": "35ef1dc8-5d45-4cca-87f2-417510404454", 00:10:51.624 "is_configured": true, 00:10:51.624 "data_offset": 2048, 00:10:51.624 "data_size": 63488 00:10:51.624 } 00:10:51.624 ] 00:10:51.624 } 00:10:51.624 } 00:10:51.624 }' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.624 BaseBdev2 00:10:51.624 BaseBdev3' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.624 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.624 11:26:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.884 11:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.884 [2024-11-05 11:26:50.994454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.884 [2024-11-05 11:26:50.994496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.884 [2024-11-05 11:26:50.994553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.884 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.885 "name": "Existed_Raid", 00:10:51.885 "uuid": "9d77cffc-49c0-4493-bee1-7c8326ee4171", 00:10:51.885 "strip_size_kb": 64, 00:10:51.885 "state": "offline", 00:10:51.885 "raid_level": "concat", 00:10:51.885 "superblock": true, 00:10:51.885 "num_base_bdevs": 3, 00:10:51.885 "num_base_bdevs_discovered": 2, 00:10:51.885 "num_base_bdevs_operational": 2, 00:10:51.885 "base_bdevs_list": [ 00:10:51.885 { 00:10:51.885 "name": null, 00:10:51.885 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:51.885 "is_configured": false, 00:10:51.885 "data_offset": 0, 00:10:51.885 "data_size": 63488 00:10:51.885 }, 00:10:51.885 { 00:10:51.885 "name": "BaseBdev2", 00:10:51.885 "uuid": "b7d140a5-742b-4f1d-9ab0-e586937f91b7", 00:10:51.885 "is_configured": true, 00:10:51.885 "data_offset": 2048, 00:10:51.885 "data_size": 63488 00:10:51.885 }, 00:10:51.885 { 00:10:51.885 "name": "BaseBdev3", 00:10:51.885 "uuid": "35ef1dc8-5d45-4cca-87f2-417510404454", 00:10:51.885 "is_configured": true, 00:10:51.885 "data_offset": 2048, 00:10:51.885 "data_size": 63488 00:10:51.885 } 00:10:51.885 ] 00:10:51.885 }' 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.885 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.451 [2024-11-05 11:26:51.584371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.451 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.452 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.452 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.452 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.452 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.711 [2024-11-05 11:26:51.740400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.711 [2024-11-05 11:26:51.740522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.711 BaseBdev2 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.711 
11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:52.711 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.712 [ 00:10:52.712 { 00:10:52.712 "name": "BaseBdev2", 00:10:52.712 "aliases": [ 00:10:52.712 "717846c8-2192-4831-a002-a7b700cb5f85" 00:10:52.712 ], 00:10:52.712 "product_name": "Malloc disk", 00:10:52.712 "block_size": 512, 00:10:52.712 "num_blocks": 65536, 00:10:52.712 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:52.712 "assigned_rate_limits": { 00:10:52.712 "rw_ios_per_sec": 0, 00:10:52.712 "rw_mbytes_per_sec": 0, 00:10:52.712 "r_mbytes_per_sec": 0, 00:10:52.712 "w_mbytes_per_sec": 0 
00:10:52.712 }, 00:10:52.712 "claimed": false, 00:10:52.712 "zoned": false, 00:10:52.712 "supported_io_types": { 00:10:52.712 "read": true, 00:10:52.712 "write": true, 00:10:52.712 "unmap": true, 00:10:52.712 "flush": true, 00:10:52.712 "reset": true, 00:10:52.712 "nvme_admin": false, 00:10:52.712 "nvme_io": false, 00:10:52.712 "nvme_io_md": false, 00:10:52.712 "write_zeroes": true, 00:10:52.712 "zcopy": true, 00:10:52.712 "get_zone_info": false, 00:10:52.712 "zone_management": false, 00:10:52.712 "zone_append": false, 00:10:52.712 "compare": false, 00:10:52.712 "compare_and_write": false, 00:10:52.712 "abort": true, 00:10:52.712 "seek_hole": false, 00:10:52.712 "seek_data": false, 00:10:52.712 "copy": true, 00:10:52.712 "nvme_iov_md": false 00:10:52.712 }, 00:10:52.712 "memory_domains": [ 00:10:52.712 { 00:10:52.712 "dma_device_id": "system", 00:10:52.712 "dma_device_type": 1 00:10:52.712 }, 00:10:52.712 { 00:10:52.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.712 "dma_device_type": 2 00:10:52.712 } 00:10:52.712 ], 00:10:52.712 "driver_specific": {} 00:10:52.712 } 00:10:52.712 ] 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.712 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.972 BaseBdev3 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.972 [ 00:10:52.972 { 00:10:52.972 "name": "BaseBdev3", 00:10:52.972 "aliases": [ 00:10:52.972 "f1c1f6ba-c588-490a-b106-9b75e649e509" 00:10:52.972 ], 00:10:52.972 "product_name": "Malloc disk", 00:10:52.972 "block_size": 512, 00:10:52.972 "num_blocks": 65536, 00:10:52.972 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:52.972 "assigned_rate_limits": { 00:10:52.972 "rw_ios_per_sec": 0, 00:10:52.972 "rw_mbytes_per_sec": 0, 
00:10:52.972 "r_mbytes_per_sec": 0, 00:10:52.972 "w_mbytes_per_sec": 0 00:10:52.972 }, 00:10:52.972 "claimed": false, 00:10:52.972 "zoned": false, 00:10:52.972 "supported_io_types": { 00:10:52.972 "read": true, 00:10:52.972 "write": true, 00:10:52.972 "unmap": true, 00:10:52.972 "flush": true, 00:10:52.972 "reset": true, 00:10:52.972 "nvme_admin": false, 00:10:52.972 "nvme_io": false, 00:10:52.972 "nvme_io_md": false, 00:10:52.972 "write_zeroes": true, 00:10:52.972 "zcopy": true, 00:10:52.972 "get_zone_info": false, 00:10:52.972 "zone_management": false, 00:10:52.972 "zone_append": false, 00:10:52.972 "compare": false, 00:10:52.972 "compare_and_write": false, 00:10:52.972 "abort": true, 00:10:52.972 "seek_hole": false, 00:10:52.972 "seek_data": false, 00:10:52.972 "copy": true, 00:10:52.972 "nvme_iov_md": false 00:10:52.972 }, 00:10:52.972 "memory_domains": [ 00:10:52.972 { 00:10:52.972 "dma_device_id": "system", 00:10:52.972 "dma_device_type": 1 00:10:52.972 }, 00:10:52.972 { 00:10:52.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.972 "dma_device_type": 2 00:10:52.972 } 00:10:52.972 ], 00:10:52.972 "driver_specific": {} 00:10:52.972 } 00:10:52.972 ] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.972 [2024-11-05 11:26:52.071397] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.972 [2024-11-05 11:26:52.071485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.972 [2024-11-05 11:26:52.071531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.972 [2024-11-05 11:26:52.073326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.972 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.973 11:26:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.973 "name": "Existed_Raid", 00:10:52.973 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:52.973 "strip_size_kb": 64, 00:10:52.973 "state": "configuring", 00:10:52.973 "raid_level": "concat", 00:10:52.973 "superblock": true, 00:10:52.973 "num_base_bdevs": 3, 00:10:52.973 "num_base_bdevs_discovered": 2, 00:10:52.973 "num_base_bdevs_operational": 3, 00:10:52.973 "base_bdevs_list": [ 00:10:52.973 { 00:10:52.973 "name": "BaseBdev1", 00:10:52.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.973 "is_configured": false, 00:10:52.973 "data_offset": 0, 00:10:52.973 "data_size": 0 00:10:52.973 }, 00:10:52.973 { 00:10:52.973 "name": "BaseBdev2", 00:10:52.973 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:52.973 "is_configured": true, 00:10:52.973 "data_offset": 2048, 00:10:52.973 "data_size": 63488 00:10:52.973 }, 00:10:52.973 { 00:10:52.973 "name": "BaseBdev3", 00:10:52.973 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:52.973 "is_configured": true, 00:10:52.973 "data_offset": 2048, 00:10:52.973 "data_size": 63488 00:10:52.973 } 00:10:52.973 ] 00:10:52.973 }' 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.973 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.549 [2024-11-05 11:26:52.594534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.549 11:26:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.549 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.549 "name": "Existed_Raid", 00:10:53.549 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:53.549 "strip_size_kb": 64, 00:10:53.549 "state": "configuring", 00:10:53.549 "raid_level": "concat", 00:10:53.550 "superblock": true, 00:10:53.550 "num_base_bdevs": 3, 00:10:53.550 "num_base_bdevs_discovered": 1, 00:10:53.550 "num_base_bdevs_operational": 3, 00:10:53.550 "base_bdevs_list": [ 00:10:53.550 { 00:10:53.550 "name": "BaseBdev1", 00:10:53.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.550 "is_configured": false, 00:10:53.550 "data_offset": 0, 00:10:53.550 "data_size": 0 00:10:53.550 }, 00:10:53.550 { 00:10:53.550 "name": null, 00:10:53.550 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:53.550 "is_configured": false, 00:10:53.550 "data_offset": 0, 00:10:53.550 "data_size": 63488 00:10:53.550 }, 00:10:53.550 { 00:10:53.550 "name": "BaseBdev3", 00:10:53.550 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:53.550 "is_configured": true, 00:10:53.550 "data_offset": 2048, 00:10:53.550 "data_size": 63488 00:10:53.550 } 00:10:53.550 ] 00:10:53.550 }' 00:10:53.550 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.550 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.808 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.068 [2024-11-05 11:26:53.087419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.068 BaseBdev1 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.068 [ 00:10:54.068 { 00:10:54.068 "name": "BaseBdev1", 00:10:54.068 "aliases": [ 00:10:54.068 "577323fd-2da7-48a0-b224-fc7d595c047b" 00:10:54.068 ], 00:10:54.068 "product_name": "Malloc disk", 00:10:54.068 "block_size": 512, 00:10:54.068 "num_blocks": 65536, 00:10:54.068 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:54.068 "assigned_rate_limits": { 00:10:54.068 "rw_ios_per_sec": 0, 00:10:54.068 "rw_mbytes_per_sec": 0, 00:10:54.068 "r_mbytes_per_sec": 0, 00:10:54.068 "w_mbytes_per_sec": 0 00:10:54.068 }, 00:10:54.068 "claimed": true, 00:10:54.068 "claim_type": "exclusive_write", 00:10:54.068 "zoned": false, 00:10:54.068 "supported_io_types": { 00:10:54.068 "read": true, 00:10:54.068 "write": true, 00:10:54.068 "unmap": true, 00:10:54.068 "flush": true, 00:10:54.068 "reset": true, 00:10:54.068 "nvme_admin": false, 00:10:54.068 "nvme_io": false, 00:10:54.068 "nvme_io_md": false, 00:10:54.068 "write_zeroes": true, 00:10:54.068 "zcopy": true, 00:10:54.068 "get_zone_info": false, 00:10:54.068 "zone_management": false, 00:10:54.068 "zone_append": false, 00:10:54.068 "compare": false, 00:10:54.068 "compare_and_write": false, 00:10:54.068 "abort": true, 00:10:54.068 "seek_hole": false, 00:10:54.068 "seek_data": false, 00:10:54.068 "copy": true, 00:10:54.068 "nvme_iov_md": false 00:10:54.068 }, 00:10:54.068 "memory_domains": [ 00:10:54.068 { 00:10:54.068 "dma_device_id": "system", 00:10:54.068 "dma_device_type": 1 00:10:54.068 }, 00:10:54.068 { 00:10:54.068 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:54.068 "dma_device_type": 2 00:10:54.068 } 00:10:54.068 ], 00:10:54.068 "driver_specific": {} 00:10:54.068 } 00:10:54.068 ] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.068 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.068 "name": "Existed_Raid", 00:10:54.068 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:54.068 "strip_size_kb": 64, 00:10:54.068 "state": "configuring", 00:10:54.068 "raid_level": "concat", 00:10:54.068 "superblock": true, 00:10:54.068 "num_base_bdevs": 3, 00:10:54.068 "num_base_bdevs_discovered": 2, 00:10:54.068 "num_base_bdevs_operational": 3, 00:10:54.068 "base_bdevs_list": [ 00:10:54.068 { 00:10:54.068 "name": "BaseBdev1", 00:10:54.068 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:54.068 "is_configured": true, 00:10:54.068 "data_offset": 2048, 00:10:54.068 "data_size": 63488 00:10:54.068 }, 00:10:54.068 { 00:10:54.068 "name": null, 00:10:54.068 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:54.068 "is_configured": false, 00:10:54.068 "data_offset": 0, 00:10:54.068 "data_size": 63488 00:10:54.068 }, 00:10:54.068 { 00:10:54.068 "name": "BaseBdev3", 00:10:54.069 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:54.069 "is_configured": true, 00:10:54.069 "data_offset": 2048, 00:10:54.069 "data_size": 63488 00:10:54.069 } 00:10:54.069 ] 00:10:54.069 }' 00:10:54.069 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.069 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.328 [2024-11-05 11:26:53.586678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.328 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.587 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.587 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.587 "name": "Existed_Raid", 00:10:54.587 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:54.587 "strip_size_kb": 64, 00:10:54.587 "state": "configuring", 00:10:54.587 "raid_level": "concat", 00:10:54.587 "superblock": true, 00:10:54.587 "num_base_bdevs": 3, 00:10:54.587 "num_base_bdevs_discovered": 1, 00:10:54.587 "num_base_bdevs_operational": 3, 00:10:54.587 "base_bdevs_list": [ 00:10:54.587 { 00:10:54.587 "name": "BaseBdev1", 00:10:54.587 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:54.587 "is_configured": true, 00:10:54.587 "data_offset": 2048, 00:10:54.587 "data_size": 63488 00:10:54.587 }, 00:10:54.587 { 00:10:54.587 "name": null, 00:10:54.587 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:54.587 "is_configured": false, 00:10:54.587 "data_offset": 0, 00:10:54.587 "data_size": 63488 00:10:54.587 }, 00:10:54.587 { 00:10:54.587 "name": null, 00:10:54.587 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:54.587 "is_configured": false, 00:10:54.587 "data_offset": 0, 00:10:54.587 "data_size": 63488 00:10:54.587 } 00:10:54.587 ] 00:10:54.587 }' 00:10:54.587 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.587 11:26:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.846 [2024-11-05 11:26:54.093796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.846 11:26:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.846 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.104 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.105 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.105 "name": "Existed_Raid", 00:10:55.105 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:55.105 "strip_size_kb": 64, 00:10:55.105 "state": "configuring", 00:10:55.105 "raid_level": "concat", 00:10:55.105 "superblock": true, 00:10:55.105 "num_base_bdevs": 3, 00:10:55.105 "num_base_bdevs_discovered": 2, 00:10:55.105 "num_base_bdevs_operational": 3, 00:10:55.105 "base_bdevs_list": [ 00:10:55.105 { 00:10:55.105 "name": "BaseBdev1", 00:10:55.105 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:55.105 "is_configured": true, 00:10:55.105 "data_offset": 2048, 00:10:55.105 "data_size": 63488 00:10:55.105 }, 00:10:55.105 { 00:10:55.105 "name": null, 00:10:55.105 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:55.105 "is_configured": 
false, 00:10:55.105 "data_offset": 0, 00:10:55.105 "data_size": 63488 00:10:55.105 }, 00:10:55.105 { 00:10:55.105 "name": "BaseBdev3", 00:10:55.105 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:55.105 "is_configured": true, 00:10:55.105 "data_offset": 2048, 00:10:55.105 "data_size": 63488 00:10:55.105 } 00:10:55.105 ] 00:10:55.105 }' 00:10:55.105 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.105 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.363 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.363 [2024-11-05 11:26:54.573035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.623 11:26:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.623 "name": "Existed_Raid", 00:10:55.623 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:55.623 "strip_size_kb": 64, 00:10:55.623 "state": "configuring", 00:10:55.623 "raid_level": "concat", 00:10:55.623 "superblock": true, 00:10:55.623 "num_base_bdevs": 3, 00:10:55.623 
"num_base_bdevs_discovered": 1, 00:10:55.623 "num_base_bdevs_operational": 3, 00:10:55.623 "base_bdevs_list": [ 00:10:55.623 { 00:10:55.623 "name": null, 00:10:55.623 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:55.623 "is_configured": false, 00:10:55.623 "data_offset": 0, 00:10:55.623 "data_size": 63488 00:10:55.623 }, 00:10:55.623 { 00:10:55.623 "name": null, 00:10:55.623 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:55.623 "is_configured": false, 00:10:55.623 "data_offset": 0, 00:10:55.623 "data_size": 63488 00:10:55.623 }, 00:10:55.623 { 00:10:55.623 "name": "BaseBdev3", 00:10:55.623 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:55.623 "is_configured": true, 00:10:55.623 "data_offset": 2048, 00:10:55.623 "data_size": 63488 00:10:55.623 } 00:10:55.623 ] 00:10:55.623 }' 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.623 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.882 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.882 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.882 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.882 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.882 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.141 11:26:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.141 [2024-11-05 11:26:55.190479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.141 
11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.141 "name": "Existed_Raid", 00:10:56.141 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:56.141 "strip_size_kb": 64, 00:10:56.141 "state": "configuring", 00:10:56.141 "raid_level": "concat", 00:10:56.141 "superblock": true, 00:10:56.141 "num_base_bdevs": 3, 00:10:56.141 "num_base_bdevs_discovered": 2, 00:10:56.141 "num_base_bdevs_operational": 3, 00:10:56.141 "base_bdevs_list": [ 00:10:56.141 { 00:10:56.141 "name": null, 00:10:56.141 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:56.141 "is_configured": false, 00:10:56.141 "data_offset": 0, 00:10:56.141 "data_size": 63488 00:10:56.141 }, 00:10:56.141 { 00:10:56.141 "name": "BaseBdev2", 00:10:56.141 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:56.141 "is_configured": true, 00:10:56.141 "data_offset": 2048, 00:10:56.141 "data_size": 63488 00:10:56.141 }, 00:10:56.141 { 00:10:56.141 "name": "BaseBdev3", 00:10:56.141 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:56.141 "is_configured": true, 00:10:56.141 "data_offset": 2048, 00:10:56.141 "data_size": 63488 00:10:56.141 } 00:10:56.141 ] 00:10:56.141 }' 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.141 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.400 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.400 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.400 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.400 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:56.400 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 577323fd-2da7-48a0-b224-fc7d595c047b 00:10:56.659 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.660 [2024-11-05 11:26:55.775120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.660 [2024-11-05 11:26:55.775419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.660 [2024-11-05 11:26:55.775474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:56.660 [2024-11-05 11:26:55.775761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:56.660 [2024-11-05 11:26:55.775979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.660 NewBaseBdev 00:10:56.660 [2024-11-05 11:26:55.776036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:56.660 [2024-11-05 11:26:55.776257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.660 [ 00:10:56.660 { 00:10:56.660 "name": "NewBaseBdev", 00:10:56.660 "aliases": [ 00:10:56.660 "577323fd-2da7-48a0-b224-fc7d595c047b" 00:10:56.660 ], 00:10:56.660 "product_name": "Malloc disk", 00:10:56.660 "block_size": 512, 
00:10:56.660 "num_blocks": 65536, 00:10:56.660 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:56.660 "assigned_rate_limits": { 00:10:56.660 "rw_ios_per_sec": 0, 00:10:56.660 "rw_mbytes_per_sec": 0, 00:10:56.660 "r_mbytes_per_sec": 0, 00:10:56.660 "w_mbytes_per_sec": 0 00:10:56.660 }, 00:10:56.660 "claimed": true, 00:10:56.660 "claim_type": "exclusive_write", 00:10:56.660 "zoned": false, 00:10:56.660 "supported_io_types": { 00:10:56.660 "read": true, 00:10:56.660 "write": true, 00:10:56.660 "unmap": true, 00:10:56.660 "flush": true, 00:10:56.660 "reset": true, 00:10:56.660 "nvme_admin": false, 00:10:56.660 "nvme_io": false, 00:10:56.660 "nvme_io_md": false, 00:10:56.660 "write_zeroes": true, 00:10:56.660 "zcopy": true, 00:10:56.660 "get_zone_info": false, 00:10:56.660 "zone_management": false, 00:10:56.660 "zone_append": false, 00:10:56.660 "compare": false, 00:10:56.660 "compare_and_write": false, 00:10:56.660 "abort": true, 00:10:56.660 "seek_hole": false, 00:10:56.660 "seek_data": false, 00:10:56.660 "copy": true, 00:10:56.660 "nvme_iov_md": false 00:10:56.660 }, 00:10:56.660 "memory_domains": [ 00:10:56.660 { 00:10:56.660 "dma_device_id": "system", 00:10:56.660 "dma_device_type": 1 00:10:56.660 }, 00:10:56.660 { 00:10:56.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.660 "dma_device_type": 2 00:10:56.660 } 00:10:56.660 ], 00:10:56.660 "driver_specific": {} 00:10:56.660 } 00:10:56.660 ] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.660 "name": "Existed_Raid", 00:10:56.660 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:56.660 "strip_size_kb": 64, 00:10:56.660 "state": "online", 00:10:56.660 "raid_level": "concat", 00:10:56.660 "superblock": true, 00:10:56.660 "num_base_bdevs": 3, 00:10:56.660 "num_base_bdevs_discovered": 3, 00:10:56.660 "num_base_bdevs_operational": 3, 00:10:56.660 "base_bdevs_list": [ 00:10:56.660 { 00:10:56.660 "name": "NewBaseBdev", 00:10:56.660 "uuid": 
"577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:56.660 "is_configured": true, 00:10:56.660 "data_offset": 2048, 00:10:56.660 "data_size": 63488 00:10:56.660 }, 00:10:56.660 { 00:10:56.660 "name": "BaseBdev2", 00:10:56.660 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:56.660 "is_configured": true, 00:10:56.660 "data_offset": 2048, 00:10:56.660 "data_size": 63488 00:10:56.660 }, 00:10:56.660 { 00:10:56.660 "name": "BaseBdev3", 00:10:56.660 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:56.660 "is_configured": true, 00:10:56.660 "data_offset": 2048, 00:10:56.660 "data_size": 63488 00:10:56.660 } 00:10:56.660 ] 00:10:56.660 }' 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.660 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:57.228 [2024-11-05 11:26:56.290633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.228 "name": "Existed_Raid", 00:10:57.228 "aliases": [ 00:10:57.228 "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4" 00:10:57.228 ], 00:10:57.228 "product_name": "Raid Volume", 00:10:57.228 "block_size": 512, 00:10:57.228 "num_blocks": 190464, 00:10:57.228 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:57.228 "assigned_rate_limits": { 00:10:57.228 "rw_ios_per_sec": 0, 00:10:57.228 "rw_mbytes_per_sec": 0, 00:10:57.228 "r_mbytes_per_sec": 0, 00:10:57.228 "w_mbytes_per_sec": 0 00:10:57.228 }, 00:10:57.228 "claimed": false, 00:10:57.228 "zoned": false, 00:10:57.228 "supported_io_types": { 00:10:57.228 "read": true, 00:10:57.228 "write": true, 00:10:57.228 "unmap": true, 00:10:57.228 "flush": true, 00:10:57.228 "reset": true, 00:10:57.228 "nvme_admin": false, 00:10:57.228 "nvme_io": false, 00:10:57.228 "nvme_io_md": false, 00:10:57.228 "write_zeroes": true, 00:10:57.228 "zcopy": false, 00:10:57.228 "get_zone_info": false, 00:10:57.228 "zone_management": false, 00:10:57.228 "zone_append": false, 00:10:57.228 "compare": false, 00:10:57.228 "compare_and_write": false, 00:10:57.228 "abort": false, 00:10:57.228 "seek_hole": false, 00:10:57.228 "seek_data": false, 00:10:57.228 "copy": false, 00:10:57.228 "nvme_iov_md": false 00:10:57.228 }, 00:10:57.228 "memory_domains": [ 00:10:57.228 { 00:10:57.228 "dma_device_id": "system", 00:10:57.228 "dma_device_type": 1 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.228 "dma_device_type": 2 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "dma_device_id": "system", 00:10:57.228 "dma_device_type": 1 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.228 "dma_device_type": 2 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "dma_device_id": "system", 00:10:57.228 "dma_device_type": 1 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.228 "dma_device_type": 2 00:10:57.228 } 00:10:57.228 ], 00:10:57.228 "driver_specific": { 00:10:57.228 "raid": { 00:10:57.228 "uuid": "1ab1fa69-8210-4fba-8d39-42e1dc58b6b4", 00:10:57.228 "strip_size_kb": 64, 00:10:57.228 "state": "online", 00:10:57.228 "raid_level": "concat", 00:10:57.228 "superblock": true, 00:10:57.228 "num_base_bdevs": 3, 00:10:57.228 "num_base_bdevs_discovered": 3, 00:10:57.228 "num_base_bdevs_operational": 3, 00:10:57.228 "base_bdevs_list": [ 00:10:57.228 { 00:10:57.228 "name": "NewBaseBdev", 00:10:57.228 "uuid": "577323fd-2da7-48a0-b224-fc7d595c047b", 00:10:57.228 "is_configured": true, 00:10:57.228 "data_offset": 2048, 00:10:57.228 "data_size": 63488 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "name": "BaseBdev2", 00:10:57.228 "uuid": "717846c8-2192-4831-a002-a7b700cb5f85", 00:10:57.228 "is_configured": true, 00:10:57.228 "data_offset": 2048, 00:10:57.228 "data_size": 63488 00:10:57.228 }, 00:10:57.228 { 00:10:57.228 "name": "BaseBdev3", 00:10:57.228 "uuid": "f1c1f6ba-c588-490a-b106-9b75e649e509", 00:10:57.228 "is_configured": true, 00:10:57.228 "data_offset": 2048, 00:10:57.228 "data_size": 63488 00:10:57.228 } 00:10:57.228 ] 00:10:57.228 } 00:10:57.228 } 00:10:57.228 }' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.228 BaseBdev2 00:10:57.228 BaseBdev3' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.228 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.229 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.487 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.487 [2024-11-05 11:26:56.549891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.487 [2024-11-05 11:26:56.549985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.487 [2024-11-05 11:26:56.550111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.487 [2024-11-05 11:26:56.550220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.488 [2024-11-05 11:26:56.550271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66360 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66360 ']' 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66360 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66360 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66360' 00:10:57.488 killing process with pid 66360 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66360 00:10:57.488 [2024-11-05 11:26:56.592375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.488 11:26:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66360 00:10:57.746 [2024-11-05 11:26:56.903542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.123 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.123 00:10:59.123 real 0m10.863s 00:10:59.123 user 0m17.288s 00:10:59.123 sys 0m1.917s 00:10:59.123 11:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:10:59.123 11:26:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.123 ************************************ 00:10:59.123 END TEST raid_state_function_test_sb 00:10:59.123 ************************************ 00:10:59.123 11:26:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:59.123 11:26:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:59.123 11:26:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.123 11:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.123 ************************************ 00:10:59.123 START TEST raid_superblock_test 00:10:59.123 ************************************ 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:59.123 11:26:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66986 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66986 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66986 ']' 00:10:59.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.123 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.124 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.124 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.124 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.124 11:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.124 [2024-11-05 11:26:58.186219] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:59.124 [2024-11-05 11:26:58.186421] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66986 ] 00:10:59.124 [2024-11-05 11:26:58.359005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.383 [2024-11-05 11:26:58.475744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.643 [2024-11-05 11:26:58.682351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.643 [2024-11-05 11:26:58.682510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:59.904 
11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.904 malloc1 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.904 [2024-11-05 11:26:59.074095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.904 [2024-11-05 11:26:59.074270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.904 [2024-11-05 11:26:59.074317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.904 [2024-11-05 11:26:59.074349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.904 [2024-11-05 11:26:59.076711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.904 [2024-11-05 11:26:59.076788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.904 pt1 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.904 malloc2 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.904 [2024-11-05 11:26:59.133159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.904 [2024-11-05 11:26:59.133303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.904 [2024-11-05 11:26:59.133344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.904 [2024-11-05 11:26:59.133372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.904 [2024-11-05 11:26:59.135577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.904 [2024-11-05 11:26:59.135663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.904 
pt2 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.904 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 malloc3 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 [2024-11-05 11:26:59.203640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.165 [2024-11-05 11:26:59.203708] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.165 [2024-11-05 11:26:59.203730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:00.165 [2024-11-05 11:26:59.203740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.165 [2024-11-05 11:26:59.205846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.165 [2024-11-05 11:26:59.205885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.165 pt3 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 [2024-11-05 11:26:59.215664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.165 [2024-11-05 11:26:59.217511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.165 [2024-11-05 11:26:59.217578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.165 [2024-11-05 11:26:59.217728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:00.165 [2024-11-05 11:26:59.217743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.165 [2024-11-05 11:26:59.217993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:00.165 [2024-11-05 11:26:59.218157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:00.165 [2024-11-05 11:26:59.218168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:00.165 [2024-11-05 11:26:59.218345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.165 11:26:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.165 "name": "raid_bdev1", 00:11:00.165 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:00.165 "strip_size_kb": 64, 00:11:00.165 "state": "online", 00:11:00.165 "raid_level": "concat", 00:11:00.165 "superblock": true, 00:11:00.165 "num_base_bdevs": 3, 00:11:00.165 "num_base_bdevs_discovered": 3, 00:11:00.165 "num_base_bdevs_operational": 3, 00:11:00.165 "base_bdevs_list": [ 00:11:00.165 { 00:11:00.165 "name": "pt1", 00:11:00.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.165 "is_configured": true, 00:11:00.165 "data_offset": 2048, 00:11:00.165 "data_size": 63488 00:11:00.165 }, 00:11:00.165 { 00:11:00.165 "name": "pt2", 00:11:00.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.165 "is_configured": true, 00:11:00.165 "data_offset": 2048, 00:11:00.165 "data_size": 63488 00:11:00.165 }, 00:11:00.165 { 00:11:00.165 "name": "pt3", 00:11:00.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.165 "is_configured": true, 00:11:00.165 "data_offset": 2048, 00:11:00.165 "data_size": 63488 00:11:00.165 } 00:11:00.165 ] 00:11:00.165 }' 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.165 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.425 [2024-11-05 11:26:59.643462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.425 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.425 "name": "raid_bdev1", 00:11:00.425 "aliases": [ 00:11:00.425 "74e33cb7-fcf2-4018-8843-3a161cb8e007" 00:11:00.425 ], 00:11:00.425 "product_name": "Raid Volume", 00:11:00.425 "block_size": 512, 00:11:00.425 "num_blocks": 190464, 00:11:00.425 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:00.425 "assigned_rate_limits": { 00:11:00.425 "rw_ios_per_sec": 0, 00:11:00.425 "rw_mbytes_per_sec": 0, 00:11:00.425 "r_mbytes_per_sec": 0, 00:11:00.425 "w_mbytes_per_sec": 0 00:11:00.425 }, 00:11:00.425 "claimed": false, 00:11:00.425 "zoned": false, 00:11:00.425 "supported_io_types": { 00:11:00.425 "read": true, 00:11:00.425 "write": true, 00:11:00.425 "unmap": true, 00:11:00.425 "flush": true, 00:11:00.425 "reset": true, 00:11:00.425 "nvme_admin": false, 00:11:00.425 "nvme_io": false, 00:11:00.425 "nvme_io_md": false, 00:11:00.425 "write_zeroes": true, 00:11:00.425 "zcopy": false, 00:11:00.425 "get_zone_info": false, 00:11:00.425 "zone_management": false, 00:11:00.425 "zone_append": false, 00:11:00.425 "compare": 
false, 00:11:00.425 "compare_and_write": false, 00:11:00.425 "abort": false, 00:11:00.425 "seek_hole": false, 00:11:00.425 "seek_data": false, 00:11:00.425 "copy": false, 00:11:00.425 "nvme_iov_md": false 00:11:00.425 }, 00:11:00.425 "memory_domains": [ 00:11:00.425 { 00:11:00.425 "dma_device_id": "system", 00:11:00.425 "dma_device_type": 1 00:11:00.425 }, 00:11:00.425 { 00:11:00.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.425 "dma_device_type": 2 00:11:00.425 }, 00:11:00.425 { 00:11:00.425 "dma_device_id": "system", 00:11:00.425 "dma_device_type": 1 00:11:00.425 }, 00:11:00.425 { 00:11:00.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.425 "dma_device_type": 2 00:11:00.425 }, 00:11:00.425 { 00:11:00.425 "dma_device_id": "system", 00:11:00.425 "dma_device_type": 1 00:11:00.425 }, 00:11:00.425 { 00:11:00.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.425 "dma_device_type": 2 00:11:00.425 } 00:11:00.425 ], 00:11:00.425 "driver_specific": { 00:11:00.425 "raid": { 00:11:00.425 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:00.425 "strip_size_kb": 64, 00:11:00.425 "state": "online", 00:11:00.425 "raid_level": "concat", 00:11:00.426 "superblock": true, 00:11:00.426 "num_base_bdevs": 3, 00:11:00.426 "num_base_bdevs_discovered": 3, 00:11:00.426 "num_base_bdevs_operational": 3, 00:11:00.426 "base_bdevs_list": [ 00:11:00.426 { 00:11:00.426 "name": "pt1", 00:11:00.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.426 "is_configured": true, 00:11:00.426 "data_offset": 2048, 00:11:00.426 "data_size": 63488 00:11:00.426 }, 00:11:00.426 { 00:11:00.426 "name": "pt2", 00:11:00.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.426 "is_configured": true, 00:11:00.426 "data_offset": 2048, 00:11:00.426 "data_size": 63488 00:11:00.426 }, 00:11:00.426 { 00:11:00.426 "name": "pt3", 00:11:00.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.426 "is_configured": true, 00:11:00.426 "data_offset": 2048, 00:11:00.426 
"data_size": 63488 00:11:00.426 } 00:11:00.426 ] 00:11:00.426 } 00:11:00.426 } 00:11:00.426 }' 00:11:00.426 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.685 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.685 pt2 00:11:00.685 pt3' 00:11:00.685 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.685 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.685 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.685 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:00.686 [2024-11-05 11:26:59.923085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.686 11:26:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74e33cb7-fcf2-4018-8843-3a161cb8e007 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74e33cb7-fcf2-4018-8843-3a161cb8e007 ']' 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 [2024-11-05 11:26:59.970724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.946 [2024-11-05 11:26:59.970766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.946 [2024-11-05 11:26:59.970858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.946 [2024-11-05 11:26:59.970926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.946 [2024-11-05 11:26:59.970936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 [2024-11-05 11:27:00.126528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:00.946 [2024-11-05 11:27:00.128655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:11:00.946 [2024-11-05 11:27:00.128781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:00.946 [2024-11-05 11:27:00.128843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:00.946 [2024-11-05 11:27:00.128905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:00.946 [2024-11-05 11:27:00.128927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:00.946 [2024-11-05 11:27:00.128945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.946 [2024-11-05 11:27:00.128956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:00.946 request: 00:11:00.946 { 00:11:00.946 "name": "raid_bdev1", 00:11:00.946 "raid_level": "concat", 00:11:00.946 "base_bdevs": [ 00:11:00.946 "malloc1", 00:11:00.946 "malloc2", 00:11:00.946 "malloc3" 00:11:00.946 ], 00:11:00.946 "strip_size_kb": 64, 00:11:00.946 "superblock": false, 00:11:00.946 "method": "bdev_raid_create", 00:11:00.946 "req_id": 1 00:11:00.946 } 00:11:00.946 Got JSON-RPC error response 00:11:00.946 response: 00:11:00.946 { 00:11:00.946 "code": -17, 00:11:00.946 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:00.946 } 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 [2024-11-05 11:27:00.194325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.946 [2024-11-05 11:27:00.194436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.946 [2024-11-05 11:27:00.194475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:00.946 [2024-11-05 11:27:00.194504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.947 [2024-11-05 11:27:00.196873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.947 [2024-11-05 11:27:00.196961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.947 [2024-11-05 11:27:00.197090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.947 [2024-11-05 11:27:00.197202] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.947 pt1 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.947 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.207 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.207 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.207 "name": "raid_bdev1", 
00:11:01.207 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:01.207 "strip_size_kb": 64, 00:11:01.207 "state": "configuring", 00:11:01.207 "raid_level": "concat", 00:11:01.207 "superblock": true, 00:11:01.207 "num_base_bdevs": 3, 00:11:01.207 "num_base_bdevs_discovered": 1, 00:11:01.207 "num_base_bdevs_operational": 3, 00:11:01.207 "base_bdevs_list": [ 00:11:01.207 { 00:11:01.207 "name": "pt1", 00:11:01.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.207 "is_configured": true, 00:11:01.207 "data_offset": 2048, 00:11:01.207 "data_size": 63488 00:11:01.207 }, 00:11:01.207 { 00:11:01.207 "name": null, 00:11:01.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.207 "is_configured": false, 00:11:01.207 "data_offset": 2048, 00:11:01.207 "data_size": 63488 00:11:01.207 }, 00:11:01.207 { 00:11:01.207 "name": null, 00:11:01.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.207 "is_configured": false, 00:11:01.207 "data_offset": 2048, 00:11:01.207 "data_size": 63488 00:11:01.207 } 00:11:01.207 ] 00:11:01.207 }' 00:11:01.207 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.207 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 [2024-11-05 11:27:00.665566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.467 [2024-11-05 11:27:00.665640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.467 [2024-11-05 11:27:00.665663] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:01.467 [2024-11-05 11:27:00.665672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.467 [2024-11-05 11:27:00.666122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.467 [2024-11-05 11:27:00.666156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.467 [2024-11-05 11:27:00.666247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.467 [2024-11-05 11:27:00.666270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.467 pt2 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 [2024-11-05 11:27:00.673558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.467 "name": "raid_bdev1", 00:11:01.467 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:01.467 "strip_size_kb": 64, 00:11:01.467 "state": "configuring", 00:11:01.467 "raid_level": "concat", 00:11:01.467 "superblock": true, 00:11:01.467 "num_base_bdevs": 3, 00:11:01.467 "num_base_bdevs_discovered": 1, 00:11:01.467 "num_base_bdevs_operational": 3, 00:11:01.467 "base_bdevs_list": [ 00:11:01.467 { 00:11:01.467 "name": "pt1", 00:11:01.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.467 "is_configured": true, 00:11:01.467 "data_offset": 2048, 00:11:01.467 "data_size": 63488 00:11:01.467 }, 00:11:01.467 { 00:11:01.467 "name": null, 00:11:01.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.467 "is_configured": false, 00:11:01.467 "data_offset": 0, 00:11:01.467 "data_size": 63488 00:11:01.467 }, 00:11:01.467 { 00:11:01.467 "name": null, 00:11:01.467 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.467 "is_configured": false, 00:11:01.467 "data_offset": 2048, 00:11:01.467 "data_size": 63488 00:11:01.467 } 00:11:01.467 ] 00:11:01.467 }' 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.467 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 [2024-11-05 11:27:01.136773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.037 [2024-11-05 11:27:01.136943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.037 [2024-11-05 11:27:01.136994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:02.037 [2024-11-05 11:27:01.137058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.037 [2024-11-05 11:27:01.137638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.037 [2024-11-05 11:27:01.137717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.037 [2024-11-05 11:27:01.137843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.037 [2024-11-05 11:27:01.137905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.037 pt2 00:11:02.037 11:27:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 [2024-11-05 11:27:01.148733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.037 [2024-11-05 11:27:01.148852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.037 [2024-11-05 11:27:01.148887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:02.037 [2024-11-05 11:27:01.148925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.037 [2024-11-05 11:27:01.149404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.037 [2024-11-05 11:27:01.149475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.037 [2024-11-05 11:27:01.149571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:02.037 [2024-11-05 11:27:01.149624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.037 [2024-11-05 11:27:01.149779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.037 [2024-11-05 11:27:01.149845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:02.037 [2024-11-05 11:27:01.150195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:02.037 [2024-11-05 11:27:01.150392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.037 [2024-11-05 11:27:01.150436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:02.037 [2024-11-05 11:27:01.150608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.037 pt3 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.037 11:27:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.037 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.037 "name": "raid_bdev1", 00:11:02.037 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:02.037 "strip_size_kb": 64, 00:11:02.037 "state": "online", 00:11:02.037 "raid_level": "concat", 00:11:02.037 "superblock": true, 00:11:02.037 "num_base_bdevs": 3, 00:11:02.037 "num_base_bdevs_discovered": 3, 00:11:02.037 "num_base_bdevs_operational": 3, 00:11:02.037 "base_bdevs_list": [ 00:11:02.037 { 00:11:02.037 "name": "pt1", 00:11:02.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.038 "is_configured": true, 00:11:02.038 "data_offset": 2048, 00:11:02.038 "data_size": 63488 00:11:02.038 }, 00:11:02.038 { 00:11:02.038 "name": "pt2", 00:11:02.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.038 "is_configured": true, 00:11:02.038 "data_offset": 2048, 00:11:02.038 "data_size": 63488 00:11:02.038 }, 00:11:02.038 { 00:11:02.038 "name": "pt3", 00:11:02.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.038 "is_configured": true, 00:11:02.038 "data_offset": 2048, 00:11:02.038 "data_size": 63488 00:11:02.038 } 00:11:02.038 ] 00:11:02.038 }' 00:11:02.038 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.038 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.607 [2024-11-05 11:27:01.616335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.607 "name": "raid_bdev1", 00:11:02.607 "aliases": [ 00:11:02.607 "74e33cb7-fcf2-4018-8843-3a161cb8e007" 00:11:02.607 ], 00:11:02.607 "product_name": "Raid Volume", 00:11:02.607 "block_size": 512, 00:11:02.607 "num_blocks": 190464, 00:11:02.607 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:02.607 "assigned_rate_limits": { 00:11:02.607 "rw_ios_per_sec": 0, 00:11:02.607 "rw_mbytes_per_sec": 0, 00:11:02.607 "r_mbytes_per_sec": 0, 00:11:02.607 "w_mbytes_per_sec": 0 00:11:02.607 }, 00:11:02.607 "claimed": false, 00:11:02.607 "zoned": false, 00:11:02.607 "supported_io_types": { 00:11:02.607 "read": true, 00:11:02.607 "write": true, 00:11:02.607 "unmap": true, 00:11:02.607 "flush": true, 00:11:02.607 "reset": true, 00:11:02.607 "nvme_admin": false, 00:11:02.607 "nvme_io": false, 
00:11:02.607 "nvme_io_md": false, 00:11:02.607 "write_zeroes": true, 00:11:02.607 "zcopy": false, 00:11:02.607 "get_zone_info": false, 00:11:02.607 "zone_management": false, 00:11:02.607 "zone_append": false, 00:11:02.607 "compare": false, 00:11:02.607 "compare_and_write": false, 00:11:02.607 "abort": false, 00:11:02.607 "seek_hole": false, 00:11:02.607 "seek_data": false, 00:11:02.607 "copy": false, 00:11:02.607 "nvme_iov_md": false 00:11:02.607 }, 00:11:02.607 "memory_domains": [ 00:11:02.607 { 00:11:02.607 "dma_device_id": "system", 00:11:02.607 "dma_device_type": 1 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.607 "dma_device_type": 2 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "dma_device_id": "system", 00:11:02.607 "dma_device_type": 1 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.607 "dma_device_type": 2 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "dma_device_id": "system", 00:11:02.607 "dma_device_type": 1 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.607 "dma_device_type": 2 00:11:02.607 } 00:11:02.607 ], 00:11:02.607 "driver_specific": { 00:11:02.607 "raid": { 00:11:02.607 "uuid": "74e33cb7-fcf2-4018-8843-3a161cb8e007", 00:11:02.607 "strip_size_kb": 64, 00:11:02.607 "state": "online", 00:11:02.607 "raid_level": "concat", 00:11:02.607 "superblock": true, 00:11:02.607 "num_base_bdevs": 3, 00:11:02.607 "num_base_bdevs_discovered": 3, 00:11:02.607 "num_base_bdevs_operational": 3, 00:11:02.607 "base_bdevs_list": [ 00:11:02.607 { 00:11:02.607 "name": "pt1", 00:11:02.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.607 "is_configured": true, 00:11:02.607 "data_offset": 2048, 00:11:02.607 "data_size": 63488 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "name": "pt2", 00:11:02.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.607 "is_configured": true, 00:11:02.607 "data_offset": 2048, 00:11:02.607 
"data_size": 63488 00:11:02.607 }, 00:11:02.607 { 00:11:02.607 "name": "pt3", 00:11:02.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.607 "is_configured": true, 00:11:02.607 "data_offset": 2048, 00:11:02.607 "data_size": 63488 00:11:02.607 } 00:11:02.607 ] 00:11:02.607 } 00:11:02.607 } 00:11:02.607 }' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:02.607 pt2 00:11:02.607 pt3' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:02.607 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.608 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.868 [2024-11-05 11:27:01.899774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74e33cb7-fcf2-4018-8843-3a161cb8e007 '!=' 74e33cb7-fcf2-4018-8843-3a161cb8e007 ']' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66986 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66986 ']' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66986 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66986 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:02.868 killing process with pid 66986 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66986' 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66986 00:11:02.868 [2024-11-05 11:27:01.984731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:02.868 [2024-11-05 11:27:01.984845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.868 [2024-11-05 11:27:01.984909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.868 [2024-11-05 11:27:01.984921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:02.868 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66986 00:11:03.127 [2024-11-05 11:27:02.290420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.530 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:04.530 00:11:04.530 real 0m5.331s 00:11:04.530 user 0m7.656s 00:11:04.530 sys 0m0.918s 00:11:04.530 11:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.530 11:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.530 ************************************ 00:11:04.530 END TEST raid_superblock_test 00:11:04.530 ************************************ 00:11:04.530 11:27:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:04.530 11:27:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:04.530 11:27:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.530 11:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.530 ************************************ 00:11:04.530 START TEST raid_read_error_test 00:11:04.530 ************************************ 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:04.530 11:27:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cHQGjE91R0 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67239 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67239 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67239 ']' 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.530 11:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.530 [2024-11-05 11:27:03.606441] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:11:04.530 [2024-11-05 11:27:03.606638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67239 ] 00:11:04.530 [2024-11-05 11:27:03.760838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.790 [2024-11-05 11:27:03.872526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.049 [2024-11-05 11:27:04.079448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.049 [2024-11-05 11:27:04.079512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.308 BaseBdev1_malloc 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.308 true 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.308 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.308 [2024-11-05 11:27:04.515954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:05.308 [2024-11-05 11:27:04.516094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.308 [2024-11-05 11:27:04.516142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:05.308 [2024-11-05 11:27:04.516177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.308 [2024-11-05 11:27:04.518313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.308 [2024-11-05 11:27:04.518392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.308 BaseBdev1 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.309 BaseBdev2_malloc 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.309 true 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.309 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.568 [2024-11-05 11:27:04.585607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:05.568 [2024-11-05 11:27:04.585674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.569 [2024-11-05 11:27:04.585692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:05.569 [2024-11-05 11:27:04.585703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.569 [2024-11-05 11:27:04.587931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.569 [2024-11-05 11:27:04.587977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.569 BaseBdev2 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.569 BaseBdev3_malloc 00:11:05.569 11:27:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.569 true 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.569 [2024-11-05 11:27:04.663706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:05.569 [2024-11-05 11:27:04.663772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.569 [2024-11-05 11:27:04.663788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.569 [2024-11-05 11:27:04.663799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.569 [2024-11-05 11:27:04.665902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.569 [2024-11-05 11:27:04.666015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:05.569 BaseBdev3 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.569 [2024-11-05 11:27:04.675755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.569 [2024-11-05 11:27:04.677534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.569 [2024-11-05 11:27:04.677614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.569 [2024-11-05 11:27:04.677797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.569 [2024-11-05 11:27:04.677809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.569 [2024-11-05 11:27:04.678037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:05.569 [2024-11-05 11:27:04.678203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.569 [2024-11-05 11:27:04.678217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:05.569 [2024-11-05 11:27:04.678353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.569 11:27:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.569 "name": "raid_bdev1", 00:11:05.569 "uuid": "203e262b-e1af-4ec7-8e45-86665c5de11b", 00:11:05.569 "strip_size_kb": 64, 00:11:05.569 "state": "online", 00:11:05.569 "raid_level": "concat", 00:11:05.569 "superblock": true, 00:11:05.569 "num_base_bdevs": 3, 00:11:05.569 "num_base_bdevs_discovered": 3, 00:11:05.569 "num_base_bdevs_operational": 3, 00:11:05.569 "base_bdevs_list": [ 00:11:05.569 { 00:11:05.569 "name": "BaseBdev1", 00:11:05.569 "uuid": "d8fc1bfd-0a60-5e03-9c24-c8c38a7d95e1", 00:11:05.569 "is_configured": true, 00:11:05.569 "data_offset": 2048, 00:11:05.569 "data_size": 63488 00:11:05.569 }, 00:11:05.569 { 00:11:05.569 "name": "BaseBdev2", 00:11:05.569 "uuid": "65a13f29-cd8d-5549-9419-6acc8bdbe1ea", 00:11:05.569 "is_configured": true, 00:11:05.569 "data_offset": 2048, 00:11:05.569 "data_size": 63488 
00:11:05.569 }, 00:11:05.569 { 00:11:05.569 "name": "BaseBdev3", 00:11:05.569 "uuid": "53fc828c-78ed-598c-ae0e-8bc601d7f1b9", 00:11:05.569 "is_configured": true, 00:11:05.569 "data_offset": 2048, 00:11:05.569 "data_size": 63488 00:11:05.569 } 00:11:05.569 ] 00:11:05.569 }' 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.569 11:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.138 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:06.138 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:06.138 [2024-11-05 11:27:05.220336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.078 "name": "raid_bdev1", 00:11:07.078 "uuid": "203e262b-e1af-4ec7-8e45-86665c5de11b", 00:11:07.078 "strip_size_kb": 64, 00:11:07.078 "state": "online", 00:11:07.078 "raid_level": "concat", 00:11:07.078 "superblock": true, 00:11:07.078 "num_base_bdevs": 3, 00:11:07.078 "num_base_bdevs_discovered": 3, 00:11:07.078 "num_base_bdevs_operational": 3, 00:11:07.078 "base_bdevs_list": [ 00:11:07.078 { 00:11:07.078 "name": "BaseBdev1", 00:11:07.078 "uuid": "d8fc1bfd-0a60-5e03-9c24-c8c38a7d95e1", 00:11:07.078 "is_configured": true, 00:11:07.078 "data_offset": 2048, 00:11:07.078 "data_size": 63488 
00:11:07.078 }, 00:11:07.078 { 00:11:07.078 "name": "BaseBdev2", 00:11:07.078 "uuid": "65a13f29-cd8d-5549-9419-6acc8bdbe1ea", 00:11:07.078 "is_configured": true, 00:11:07.078 "data_offset": 2048, 00:11:07.078 "data_size": 63488 00:11:07.078 }, 00:11:07.078 { 00:11:07.078 "name": "BaseBdev3", 00:11:07.078 "uuid": "53fc828c-78ed-598c-ae0e-8bc601d7f1b9", 00:11:07.078 "is_configured": true, 00:11:07.078 "data_offset": 2048, 00:11:07.078 "data_size": 63488 00:11:07.078 } 00:11:07.078 ] 00:11:07.078 }' 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.078 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 [2024-11-05 11:27:06.599350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.338 [2024-11-05 11:27:06.599455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.338 [2024-11-05 11:27:06.602217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.338 [2024-11-05 11:27:06.602315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.338 [2024-11-05 11:27:06.602374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.338 [2024-11-05 11:27:06.602415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.338 { 00:11:07.338 "results": [ 00:11:07.338 { 00:11:07.338 "job": "raid_bdev1", 
00:11:07.338 "core_mask": "0x1", 00:11:07.338 "workload": "randrw", 00:11:07.338 "percentage": 50, 00:11:07.338 "status": "finished", 00:11:07.338 "queue_depth": 1, 00:11:07.338 "io_size": 131072, 00:11:07.338 "runtime": 1.379942, 00:11:07.338 "iops": 15578.915635584684, 00:11:07.338 "mibps": 1947.3644544480856, 00:11:07.338 "io_failed": 1, 00:11:07.338 "io_timeout": 0, 00:11:07.338 "avg_latency_us": 89.14938836395558, 00:11:07.338 "min_latency_us": 25.823580786026202, 00:11:07.338 "max_latency_us": 1373.6803493449781 00:11:07.338 } 00:11:07.338 ], 00:11:07.338 "core_count": 1 00:11:07.338 } 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67239 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67239 ']' 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67239 00:11:07.338 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67239 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.598 killing process with pid 67239 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67239' 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67239 00:11:07.598 [2024-11-05 11:27:06.651785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.598 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67239 00:11:07.858 [2024-11-05 
11:27:06.892108] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cHQGjE91R0 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:09.239 ************************************ 00:11:09.239 END TEST raid_read_error_test 00:11:09.239 ************************************ 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:09.239 00:11:09.239 real 0m4.606s 00:11:09.239 user 0m5.476s 00:11:09.239 sys 0m0.568s 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.239 11:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.239 11:27:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:09.239 11:27:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:09.239 11:27:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.239 11:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.239 ************************************ 00:11:09.239 START TEST raid_write_error_test 00:11:09.239 ************************************ 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:11:09.239 11:27:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.239 11:27:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c9Vc5gk4xm 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67385 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67385 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67385 ']' 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.239 11:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.239 [2024-11-05 11:27:08.293537] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:11:09.239 [2024-11-05 11:27:08.293672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67385 ] 00:11:09.239 [2024-11-05 11:27:08.472947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.497 [2024-11-05 11:27:08.604148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.756 [2024-11-05 11:27:08.849978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.757 [2024-11-05 11:27:08.850046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.017 BaseBdev1_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.017 true 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.017 [2024-11-05 11:27:09.227723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.017 [2024-11-05 11:27:09.227790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.017 [2024-11-05 11:27:09.227814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.017 [2024-11-05 11:27:09.227827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.017 [2024-11-05 11:27:09.230302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.017 [2024-11-05 11:27:09.230348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.017 BaseBdev1 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.017 BaseBdev2_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.017 true 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.017 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 [2024-11-05 11:27:09.297524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.278 [2024-11-05 11:27:09.297594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.278 [2024-11-05 11:27:09.297616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.278 [2024-11-05 11:27:09.297629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.278 [2024-11-05 11:27:09.300070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.278 [2024-11-05 11:27:09.300122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.278 BaseBdev2 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.278 11:27:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 BaseBdev3_malloc 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 true 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 [2024-11-05 11:27:09.378140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.278 [2024-11-05 11:27:09.378254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.278 [2024-11-05 11:27:09.378281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.278 [2024-11-05 11:27:09.378293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.278 [2024-11-05 11:27:09.380688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.278 [2024-11-05 11:27:09.380730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:10.278 BaseBdev3 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 [2024-11-05 11:27:09.390209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.278 [2024-11-05 11:27:09.392294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.278 [2024-11-05 11:27:09.392399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.278 [2024-11-05 11:27:09.392621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.278 [2024-11-05 11:27:09.392635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:10.278 [2024-11-05 11:27:09.392916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:10.278 [2024-11-05 11:27:09.393091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.278 [2024-11-05 11:27:09.393106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:10.278 [2024-11-05 11:27:09.393285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.278 "name": "raid_bdev1", 00:11:10.278 "uuid": "c8870837-f3ea-4e76-b80e-1d32e4ae1c13", 00:11:10.278 "strip_size_kb": 64, 00:11:10.278 "state": "online", 00:11:10.278 "raid_level": "concat", 00:11:10.278 "superblock": true, 00:11:10.278 "num_base_bdevs": 3, 00:11:10.278 "num_base_bdevs_discovered": 3, 00:11:10.278 "num_base_bdevs_operational": 3, 00:11:10.278 "base_bdevs_list": [ 00:11:10.278 { 00:11:10.278 
"name": "BaseBdev1", 00:11:10.278 "uuid": "71369cd1-2653-5205-bb99-26950d442605", 00:11:10.278 "is_configured": true, 00:11:10.278 "data_offset": 2048, 00:11:10.278 "data_size": 63488 00:11:10.278 }, 00:11:10.278 { 00:11:10.278 "name": "BaseBdev2", 00:11:10.278 "uuid": "cd06ada6-cc20-5e26-88cd-48441d98438b", 00:11:10.278 "is_configured": true, 00:11:10.278 "data_offset": 2048, 00:11:10.278 "data_size": 63488 00:11:10.278 }, 00:11:10.278 { 00:11:10.278 "name": "BaseBdev3", 00:11:10.278 "uuid": "27083ca2-f689-5c06-bb9a-278ea597ab7b", 00:11:10.278 "is_configured": true, 00:11:10.278 "data_offset": 2048, 00:11:10.278 "data_size": 63488 00:11:10.278 } 00:11:10.278 ] 00:11:10.278 }' 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.278 11:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.686 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.686 11:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.686 [2024-11-05 11:27:09.942886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.637 "name": "raid_bdev1", 00:11:11.637 "uuid": "c8870837-f3ea-4e76-b80e-1d32e4ae1c13", 00:11:11.637 "strip_size_kb": 64, 00:11:11.637 "state": "online", 
00:11:11.637 "raid_level": "concat", 00:11:11.637 "superblock": true, 00:11:11.637 "num_base_bdevs": 3, 00:11:11.637 "num_base_bdevs_discovered": 3, 00:11:11.637 "num_base_bdevs_operational": 3, 00:11:11.637 "base_bdevs_list": [ 00:11:11.637 { 00:11:11.637 "name": "BaseBdev1", 00:11:11.637 "uuid": "71369cd1-2653-5205-bb99-26950d442605", 00:11:11.637 "is_configured": true, 00:11:11.637 "data_offset": 2048, 00:11:11.637 "data_size": 63488 00:11:11.637 }, 00:11:11.637 { 00:11:11.637 "name": "BaseBdev2", 00:11:11.637 "uuid": "cd06ada6-cc20-5e26-88cd-48441d98438b", 00:11:11.637 "is_configured": true, 00:11:11.637 "data_offset": 2048, 00:11:11.637 "data_size": 63488 00:11:11.637 }, 00:11:11.637 { 00:11:11.637 "name": "BaseBdev3", 00:11:11.637 "uuid": "27083ca2-f689-5c06-bb9a-278ea597ab7b", 00:11:11.637 "is_configured": true, 00:11:11.637 "data_offset": 2048, 00:11:11.637 "data_size": 63488 00:11:11.637 } 00:11:11.637 ] 00:11:11.637 }' 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.637 11:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 [2024-11-05 11:27:11.312343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.206 [2024-11-05 11:27:11.312451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.206 [2024-11-05 11:27:11.315690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.206 [2024-11-05 11:27:11.315787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.206 [2024-11-05 11:27:11.315840] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.206 [2024-11-05 11:27:11.315854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:12.206 { 00:11:12.206 "results": [ 00:11:12.206 { 00:11:12.206 "job": "raid_bdev1", 00:11:12.206 "core_mask": "0x1", 00:11:12.206 "workload": "randrw", 00:11:12.206 "percentage": 50, 00:11:12.206 "status": "finished", 00:11:12.206 "queue_depth": 1, 00:11:12.206 "io_size": 131072, 00:11:12.206 "runtime": 1.369936, 00:11:12.206 "iops": 13396.246247999907, 00:11:12.206 "mibps": 1674.5307809999883, 00:11:12.206 "io_failed": 1, 00:11:12.206 "io_timeout": 0, 00:11:12.206 "avg_latency_us": 103.23888173631288, 00:11:12.206 "min_latency_us": 29.065502183406114, 00:11:12.206 "max_latency_us": 1760.0279475982534 00:11:12.206 } 00:11:12.206 ], 00:11:12.206 "core_count": 1 00:11:12.206 } 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67385 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67385 ']' 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67385 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67385 00:11:12.206 killing process with pid 67385 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.206 
11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67385' 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67385 00:11:12.206 [2024-11-05 11:27:11.360538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.206 11:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67385 00:11:12.465 [2024-11-05 11:27:11.632981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c9Vc5gk4xm 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:13.842 ************************************ 00:11:13.842 END TEST raid_write_error_test 00:11:13.842 ************************************ 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:13.842 00:11:13.842 real 0m4.857s 00:11:13.842 user 0m5.755s 00:11:13.842 sys 0m0.614s 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.842 11:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 11:27:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:13.842 11:27:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:13.842 11:27:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:13.842 11:27:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.842 11:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 ************************************ 00:11:13.842 START TEST raid_state_function_test 00:11:13.842 ************************************ 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:13.842 Process raid pid: 67523 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67523 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67523' 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67523 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67523 ']' 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.842 11:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.101 [2024-11-05 11:27:13.195370] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:11:14.101 [2024-11-05 11:27:13.195607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.101 [2024-11-05 11:27:13.375209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.360 [2024-11-05 11:27:13.509350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.620 [2024-11-05 11:27:13.738829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.620 [2024-11-05 11:27:13.738893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 [2024-11-05 11:27:14.100318] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.879 [2024-11-05 11:27:14.100465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.879 [2024-11-05 11:27:14.100498] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.879 [2024-11-05 11:27:14.100510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.879 [2024-11-05 11:27:14.100517] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.879 [2024-11-05 11:27:14.100528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.879 
11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.149 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.149 "name": "Existed_Raid", 00:11:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.149 "strip_size_kb": 0, 00:11:15.149 "state": "configuring", 00:11:15.149 "raid_level": "raid1", 00:11:15.149 "superblock": false, 00:11:15.149 "num_base_bdevs": 3, 00:11:15.149 "num_base_bdevs_discovered": 0, 00:11:15.149 "num_base_bdevs_operational": 3, 00:11:15.149 "base_bdevs_list": [ 00:11:15.149 { 00:11:15.149 "name": "BaseBdev1", 00:11:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.149 "is_configured": false, 00:11:15.149 "data_offset": 0, 00:11:15.149 "data_size": 0 00:11:15.149 }, 00:11:15.149 { 00:11:15.149 "name": "BaseBdev2", 00:11:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.149 "is_configured": false, 00:11:15.149 "data_offset": 0, 00:11:15.149 "data_size": 0 00:11:15.149 }, 00:11:15.149 { 00:11:15.149 "name": "BaseBdev3", 00:11:15.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.149 "is_configured": false, 00:11:15.149 "data_offset": 0, 00:11:15.149 "data_size": 0 00:11:15.149 } 00:11:15.149 ] 00:11:15.149 }' 00:11:15.149 11:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.149 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [2024-11-05 11:27:14.555499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.427 [2024-11-05 11:27:14.555635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [2024-11-05 11:27:14.563460] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.427 [2024-11-05 11:27:14.563570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.427 [2024-11-05 11:27:14.563626] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.427 [2024-11-05 11:27:14.563654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.427 [2024-11-05 11:27:14.563686] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.427 [2024-11-05 11:27:14.563724] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [2024-11-05 11:27:14.615769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.427 BaseBdev1 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [ 00:11:15.427 { 00:11:15.427 "name": "BaseBdev1", 00:11:15.427 "aliases": [ 00:11:15.427 "bcb75f14-9f06-469f-89eb-90d69ecfb1a7" 00:11:15.427 ], 00:11:15.427 "product_name": "Malloc disk", 00:11:15.427 "block_size": 512, 00:11:15.427 "num_blocks": 65536, 00:11:15.427 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:15.427 "assigned_rate_limits": { 00:11:15.427 "rw_ios_per_sec": 0, 00:11:15.427 "rw_mbytes_per_sec": 0, 00:11:15.427 "r_mbytes_per_sec": 0, 00:11:15.427 "w_mbytes_per_sec": 0 00:11:15.427 }, 00:11:15.427 "claimed": true, 00:11:15.427 "claim_type": "exclusive_write", 00:11:15.427 "zoned": false, 00:11:15.427 "supported_io_types": { 00:11:15.427 "read": true, 00:11:15.427 "write": true, 00:11:15.427 "unmap": true, 00:11:15.427 "flush": true, 00:11:15.427 "reset": true, 00:11:15.427 "nvme_admin": false, 00:11:15.427 "nvme_io": false, 00:11:15.427 "nvme_io_md": false, 00:11:15.427 "write_zeroes": true, 00:11:15.427 "zcopy": true, 00:11:15.427 "get_zone_info": false, 00:11:15.427 "zone_management": false, 00:11:15.427 "zone_append": false, 00:11:15.427 "compare": false, 00:11:15.427 "compare_and_write": false, 00:11:15.427 "abort": true, 00:11:15.427 "seek_hole": false, 00:11:15.427 "seek_data": false, 00:11:15.427 "copy": true, 00:11:15.427 "nvme_iov_md": false 00:11:15.427 }, 00:11:15.427 "memory_domains": [ 00:11:15.427 { 00:11:15.427 "dma_device_id": "system", 00:11:15.427 "dma_device_type": 1 00:11:15.427 }, 00:11:15.427 { 00:11:15.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.427 "dma_device_type": 2 00:11:15.427 } 00:11:15.427 ], 00:11:15.427 "driver_specific": {} 00:11:15.427 } 00:11:15.427 ] 00:11:15.427 11:27:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.686 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:15.686 "name": "Existed_Raid", 00:11:15.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.686 "strip_size_kb": 0, 00:11:15.686 "state": "configuring", 00:11:15.686 "raid_level": "raid1", 00:11:15.686 "superblock": false, 00:11:15.686 "num_base_bdevs": 3, 00:11:15.686 "num_base_bdevs_discovered": 1, 00:11:15.686 "num_base_bdevs_operational": 3, 00:11:15.686 "base_bdevs_list": [ 00:11:15.686 { 00:11:15.686 "name": "BaseBdev1", 00:11:15.686 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:15.686 "is_configured": true, 00:11:15.686 "data_offset": 0, 00:11:15.686 "data_size": 65536 00:11:15.686 }, 00:11:15.686 { 00:11:15.686 "name": "BaseBdev2", 00:11:15.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.686 "is_configured": false, 00:11:15.686 "data_offset": 0, 00:11:15.686 "data_size": 0 00:11:15.686 }, 00:11:15.686 { 00:11:15.686 "name": "BaseBdev3", 00:11:15.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.686 "is_configured": false, 00:11:15.686 "data_offset": 0, 00:11:15.686 "data_size": 0 00:11:15.686 } 00:11:15.686 ] 00:11:15.686 }' 00:11:15.686 11:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.686 11:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 [2024-11-05 11:27:15.147113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.945 [2024-11-05 11:27:15.147193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 [2024-11-05 11:27:15.155160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.945 [2024-11-05 11:27:15.157298] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.945 [2024-11-05 11:27:15.157343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.945 [2024-11-05 11:27:15.157356] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.945 [2024-11-05 11:27:15.157366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.945 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.945 "name": "Existed_Raid", 00:11:15.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.945 "strip_size_kb": 0, 00:11:15.945 "state": "configuring", 00:11:15.945 "raid_level": "raid1", 00:11:15.945 "superblock": false, 00:11:15.945 "num_base_bdevs": 3, 00:11:15.945 "num_base_bdevs_discovered": 1, 00:11:15.945 "num_base_bdevs_operational": 3, 00:11:15.945 "base_bdevs_list": [ 00:11:15.945 { 00:11:15.945 "name": "BaseBdev1", 00:11:15.945 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:15.945 "is_configured": true, 00:11:15.945 "data_offset": 0, 00:11:15.945 "data_size": 65536 00:11:15.945 }, 00:11:15.945 { 00:11:15.945 "name": "BaseBdev2", 00:11:15.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.945 
"is_configured": false, 00:11:15.945 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 }, 00:11:15.946 { 00:11:15.946 "name": "BaseBdev3", 00:11:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.946 "is_configured": false, 00:11:15.946 "data_offset": 0, 00:11:15.946 "data_size": 0 00:11:15.946 } 00:11:15.946 ] 00:11:15.946 }' 00:11:15.946 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.946 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 [2024-11-05 11:27:15.680834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.513 BaseBdev2 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.513 11:27:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 [ 00:11:16.513 { 00:11:16.513 "name": "BaseBdev2", 00:11:16.513 "aliases": [ 00:11:16.513 "bf297d41-7e99-45b2-8fbe-f7287659c5cb" 00:11:16.513 ], 00:11:16.513 "product_name": "Malloc disk", 00:11:16.513 "block_size": 512, 00:11:16.513 "num_blocks": 65536, 00:11:16.513 "uuid": "bf297d41-7e99-45b2-8fbe-f7287659c5cb", 00:11:16.513 "assigned_rate_limits": { 00:11:16.513 "rw_ios_per_sec": 0, 00:11:16.513 "rw_mbytes_per_sec": 0, 00:11:16.513 "r_mbytes_per_sec": 0, 00:11:16.513 "w_mbytes_per_sec": 0 00:11:16.513 }, 00:11:16.513 "claimed": true, 00:11:16.513 "claim_type": "exclusive_write", 00:11:16.513 "zoned": false, 00:11:16.513 "supported_io_types": { 00:11:16.513 "read": true, 00:11:16.513 "write": true, 00:11:16.513 "unmap": true, 00:11:16.513 "flush": true, 00:11:16.513 "reset": true, 00:11:16.513 "nvme_admin": false, 00:11:16.513 "nvme_io": false, 00:11:16.513 "nvme_io_md": false, 00:11:16.513 "write_zeroes": true, 00:11:16.513 "zcopy": true, 00:11:16.513 "get_zone_info": false, 00:11:16.513 "zone_management": false, 00:11:16.513 "zone_append": false, 00:11:16.513 "compare": false, 00:11:16.513 "compare_and_write": false, 00:11:16.513 "abort": true, 00:11:16.513 "seek_hole": false, 00:11:16.513 "seek_data": false, 00:11:16.513 "copy": true, 00:11:16.513 "nvme_iov_md": false 00:11:16.513 }, 00:11:16.513 
"memory_domains": [ 00:11:16.513 { 00:11:16.513 "dma_device_id": "system", 00:11:16.513 "dma_device_type": 1 00:11:16.513 }, 00:11:16.513 { 00:11:16.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.513 "dma_device_type": 2 00:11:16.513 } 00:11:16.513 ], 00:11:16.513 "driver_specific": {} 00:11:16.513 } 00:11:16.513 ] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.513 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.513 "name": "Existed_Raid", 00:11:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.513 "strip_size_kb": 0, 00:11:16.513 "state": "configuring", 00:11:16.513 "raid_level": "raid1", 00:11:16.513 "superblock": false, 00:11:16.513 "num_base_bdevs": 3, 00:11:16.513 "num_base_bdevs_discovered": 2, 00:11:16.513 "num_base_bdevs_operational": 3, 00:11:16.513 "base_bdevs_list": [ 00:11:16.514 { 00:11:16.514 "name": "BaseBdev1", 00:11:16.514 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:16.514 "is_configured": true, 00:11:16.514 "data_offset": 0, 00:11:16.514 "data_size": 65536 00:11:16.514 }, 00:11:16.514 { 00:11:16.514 "name": "BaseBdev2", 00:11:16.514 "uuid": "bf297d41-7e99-45b2-8fbe-f7287659c5cb", 00:11:16.514 "is_configured": true, 00:11:16.514 "data_offset": 0, 00:11:16.514 "data_size": 65536 00:11:16.514 }, 00:11:16.514 { 00:11:16.514 "name": "BaseBdev3", 00:11:16.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.514 "is_configured": false, 00:11:16.514 "data_offset": 0, 00:11:16.514 "data_size": 0 00:11:16.514 } 00:11:16.514 ] 00:11:16.514 }' 00:11:16.514 11:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.514 11:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 [2024-11-05 11:27:16.212639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.082 [2024-11-05 11:27:16.212791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:17.082 [2024-11-05 11:27:16.212826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:17.082 [2024-11-05 11:27:16.213214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.082 [2024-11-05 11:27:16.213467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:17.082 BaseBdev3 00:11:17.082 [2024-11-05 11:27:16.213515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:17.082 [2024-11-05 11:27:16.213814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 [ 00:11:17.082 { 00:11:17.082 "name": "BaseBdev3", 00:11:17.082 "aliases": [ 00:11:17.082 "a8684469-1a6b-425c-a58a-ed327e583763" 00:11:17.082 ], 00:11:17.082 "product_name": "Malloc disk", 00:11:17.082 "block_size": 512, 00:11:17.082 "num_blocks": 65536, 00:11:17.082 "uuid": "a8684469-1a6b-425c-a58a-ed327e583763", 00:11:17.082 "assigned_rate_limits": { 00:11:17.082 "rw_ios_per_sec": 0, 00:11:17.082 "rw_mbytes_per_sec": 0, 00:11:17.082 "r_mbytes_per_sec": 0, 00:11:17.082 "w_mbytes_per_sec": 0 00:11:17.082 }, 00:11:17.082 "claimed": true, 00:11:17.082 "claim_type": "exclusive_write", 00:11:17.082 "zoned": false, 00:11:17.082 "supported_io_types": { 00:11:17.082 "read": true, 00:11:17.082 "write": true, 00:11:17.082 "unmap": true, 00:11:17.082 "flush": true, 00:11:17.082 "reset": true, 00:11:17.082 "nvme_admin": false, 00:11:17.082 "nvme_io": false, 00:11:17.082 "nvme_io_md": false, 00:11:17.082 "write_zeroes": true, 00:11:17.082 "zcopy": true, 00:11:17.082 "get_zone_info": false, 00:11:17.082 "zone_management": false, 00:11:17.082 "zone_append": false, 00:11:17.082 "compare": false, 00:11:17.082 "compare_and_write": false, 00:11:17.082 "abort": true, 00:11:17.082 "seek_hole": false, 00:11:17.082 "seek_data": false, 00:11:17.082 
"copy": true, 00:11:17.082 "nvme_iov_md": false 00:11:17.082 }, 00:11:17.082 "memory_domains": [ 00:11:17.082 { 00:11:17.082 "dma_device_id": "system", 00:11:17.082 "dma_device_type": 1 00:11:17.082 }, 00:11:17.082 { 00:11:17.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.082 "dma_device_type": 2 00:11:17.082 } 00:11:17.082 ], 00:11:17.082 "driver_specific": {} 00:11:17.082 } 00:11:17.082 ] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.082 11:27:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.082 "name": "Existed_Raid", 00:11:17.082 "uuid": "81585f09-750a-479b-932c-f5418448de16", 00:11:17.082 "strip_size_kb": 0, 00:11:17.082 "state": "online", 00:11:17.082 "raid_level": "raid1", 00:11:17.082 "superblock": false, 00:11:17.082 "num_base_bdevs": 3, 00:11:17.082 "num_base_bdevs_discovered": 3, 00:11:17.082 "num_base_bdevs_operational": 3, 00:11:17.082 "base_bdevs_list": [ 00:11:17.082 { 00:11:17.082 "name": "BaseBdev1", 00:11:17.082 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:17.082 "is_configured": true, 00:11:17.082 "data_offset": 0, 00:11:17.082 "data_size": 65536 00:11:17.082 }, 00:11:17.082 { 00:11:17.082 "name": "BaseBdev2", 00:11:17.082 "uuid": "bf297d41-7e99-45b2-8fbe-f7287659c5cb", 00:11:17.082 "is_configured": true, 00:11:17.082 "data_offset": 0, 00:11:17.082 "data_size": 65536 00:11:17.082 }, 00:11:17.082 { 00:11:17.082 "name": "BaseBdev3", 00:11:17.082 "uuid": "a8684469-1a6b-425c-a58a-ed327e583763", 00:11:17.082 "is_configured": true, 00:11:17.082 "data_offset": 0, 00:11:17.082 "data_size": 65536 00:11:17.082 } 00:11:17.082 ] 00:11:17.082 }' 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.082 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.651 11:27:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.651 [2024-11-05 11:27:16.728412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.651 "name": "Existed_Raid", 00:11:17.651 "aliases": [ 00:11:17.651 "81585f09-750a-479b-932c-f5418448de16" 00:11:17.651 ], 00:11:17.651 "product_name": "Raid Volume", 00:11:17.651 "block_size": 512, 00:11:17.651 "num_blocks": 65536, 00:11:17.651 "uuid": "81585f09-750a-479b-932c-f5418448de16", 00:11:17.651 "assigned_rate_limits": { 00:11:17.651 "rw_ios_per_sec": 0, 00:11:17.651 "rw_mbytes_per_sec": 0, 00:11:17.651 "r_mbytes_per_sec": 0, 00:11:17.651 "w_mbytes_per_sec": 0 00:11:17.651 }, 00:11:17.651 "claimed": false, 00:11:17.651 "zoned": false, 
00:11:17.651 "supported_io_types": { 00:11:17.651 "read": true, 00:11:17.651 "write": true, 00:11:17.651 "unmap": false, 00:11:17.651 "flush": false, 00:11:17.651 "reset": true, 00:11:17.651 "nvme_admin": false, 00:11:17.651 "nvme_io": false, 00:11:17.651 "nvme_io_md": false, 00:11:17.651 "write_zeroes": true, 00:11:17.651 "zcopy": false, 00:11:17.651 "get_zone_info": false, 00:11:17.651 "zone_management": false, 00:11:17.651 "zone_append": false, 00:11:17.651 "compare": false, 00:11:17.651 "compare_and_write": false, 00:11:17.651 "abort": false, 00:11:17.651 "seek_hole": false, 00:11:17.651 "seek_data": false, 00:11:17.651 "copy": false, 00:11:17.651 "nvme_iov_md": false 00:11:17.651 }, 00:11:17.651 "memory_domains": [ 00:11:17.651 { 00:11:17.651 "dma_device_id": "system", 00:11:17.651 "dma_device_type": 1 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.651 "dma_device_type": 2 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "dma_device_id": "system", 00:11:17.651 "dma_device_type": 1 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.651 "dma_device_type": 2 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "dma_device_id": "system", 00:11:17.651 "dma_device_type": 1 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.651 "dma_device_type": 2 00:11:17.651 } 00:11:17.651 ], 00:11:17.651 "driver_specific": { 00:11:17.651 "raid": { 00:11:17.651 "uuid": "81585f09-750a-479b-932c-f5418448de16", 00:11:17.651 "strip_size_kb": 0, 00:11:17.651 "state": "online", 00:11:17.651 "raid_level": "raid1", 00:11:17.651 "superblock": false, 00:11:17.651 "num_base_bdevs": 3, 00:11:17.651 "num_base_bdevs_discovered": 3, 00:11:17.651 "num_base_bdevs_operational": 3, 00:11:17.651 "base_bdevs_list": [ 00:11:17.651 { 00:11:17.651 "name": "BaseBdev1", 00:11:17.651 "uuid": "bcb75f14-9f06-469f-89eb-90d69ecfb1a7", 00:11:17.651 "is_configured": true, 00:11:17.651 
"data_offset": 0, 00:11:17.651 "data_size": 65536 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "name": "BaseBdev2", 00:11:17.651 "uuid": "bf297d41-7e99-45b2-8fbe-f7287659c5cb", 00:11:17.651 "is_configured": true, 00:11:17.651 "data_offset": 0, 00:11:17.651 "data_size": 65536 00:11:17.651 }, 00:11:17.651 { 00:11:17.651 "name": "BaseBdev3", 00:11:17.651 "uuid": "a8684469-1a6b-425c-a58a-ed327e583763", 00:11:17.651 "is_configured": true, 00:11:17.651 "data_offset": 0, 00:11:17.651 "data_size": 65536 00:11:17.651 } 00:11:17.651 ] 00:11:17.651 } 00:11:17.651 } 00:11:17.651 }' 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.651 BaseBdev2 00:11:17.651 BaseBdev3' 00:11:17.651 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.652 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.911 11:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.911 [2024-11-05 11:27:17.027590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.911 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.170 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.170 "name": "Existed_Raid", 00:11:18.170 "uuid": "81585f09-750a-479b-932c-f5418448de16", 00:11:18.170 "strip_size_kb": 0, 00:11:18.170 "state": "online", 00:11:18.170 "raid_level": "raid1", 00:11:18.170 "superblock": false, 00:11:18.170 "num_base_bdevs": 3, 00:11:18.170 "num_base_bdevs_discovered": 2, 00:11:18.170 "num_base_bdevs_operational": 2, 00:11:18.170 "base_bdevs_list": [ 00:11:18.170 { 00:11:18.170 "name": null, 00:11:18.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.170 "is_configured": false, 00:11:18.170 "data_offset": 0, 00:11:18.170 "data_size": 65536 00:11:18.170 }, 00:11:18.170 { 00:11:18.170 "name": "BaseBdev2", 00:11:18.170 "uuid": "bf297d41-7e99-45b2-8fbe-f7287659c5cb", 00:11:18.170 "is_configured": true, 00:11:18.170 "data_offset": 0, 00:11:18.170 "data_size": 65536 00:11:18.170 }, 00:11:18.170 { 00:11:18.170 "name": "BaseBdev3", 00:11:18.170 "uuid": "a8684469-1a6b-425c-a58a-ed327e583763", 00:11:18.170 "is_configured": true, 00:11:18.170 "data_offset": 0, 00:11:18.170 "data_size": 65536 00:11:18.170 } 00:11:18.170 ] 
00:11:18.170 }' 00:11:18.170 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.170 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.429 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.429 [2024-11-05 11:27:17.658066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.689 11:27:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 [2024-11-05 11:27:17.833389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.689 [2024-11-05 11:27:17.833499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.689 [2024-11-05 11:27:17.942166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.689 [2024-11-05 11:27:17.942221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.689 [2024-11-05 11:27:17.942233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.689 11:27:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.689 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.949 11:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 BaseBdev2 00:11:18.949 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:18.950 
11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 [
00:11:18.950 {
00:11:18.950 "name": "BaseBdev2",
00:11:18.950 "aliases": [
00:11:18.950 "d095a462-e938-485c-a2bb-031d634c9a02"
00:11:18.950 ],
00:11:18.950 "product_name": "Malloc disk",
00:11:18.950 "block_size": 512,
00:11:18.950 "num_blocks": 65536,
00:11:18.950 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:18.950 "assigned_rate_limits": {
00:11:18.950 "rw_ios_per_sec": 0,
00:11:18.950 "rw_mbytes_per_sec": 0,
00:11:18.950 "r_mbytes_per_sec": 0,
00:11:18.950 "w_mbytes_per_sec": 0
00:11:18.950 },
00:11:18.950 "claimed": false,
00:11:18.950 "zoned": false,
00:11:18.950 "supported_io_types": {
00:11:18.950 "read": true,
00:11:18.950 "write": true,
00:11:18.950 "unmap": true,
00:11:18.950 "flush": true,
00:11:18.950 "reset": true,
00:11:18.950 "nvme_admin": false,
00:11:18.950 "nvme_io": false,
00:11:18.950 "nvme_io_md": false,
00:11:18.950 "write_zeroes": true,
00:11:18.950 "zcopy": true,
00:11:18.950 "get_zone_info": false,
00:11:18.950 "zone_management": false,
00:11:18.950 "zone_append": false,
00:11:18.950 "compare": false,
00:11:18.950 "compare_and_write": false,
00:11:18.950 "abort": true,
00:11:18.950 "seek_hole": false,
00:11:18.950 "seek_data": false,
00:11:18.950 "copy": true,
00:11:18.950 "nvme_iov_md": false
00:11:18.950 },
00:11:18.950 "memory_domains": [
00:11:18.950 {
00:11:18.950 "dma_device_id": "system",
00:11:18.950 "dma_device_type": 1
00:11:18.950 },
00:11:18.950 {
00:11:18.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:18.950 "dma_device_type": 2
00:11:18.950 }
00:11:18.950 ],
00:11:18.950 "driver_specific": {}
00:11:18.950 }
00:11:18.950 ]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 BaseBdev3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 [
00:11:18.950 {
00:11:18.950 "name": "BaseBdev3",
00:11:18.950 "aliases": [
00:11:18.950 "25adc573-0f75-4d8c-84cd-145e64e4cfe4"
00:11:18.950 ],
00:11:18.950 "product_name": "Malloc disk",
00:11:18.950 "block_size": 512,
00:11:18.950 "num_blocks": 65536,
00:11:18.950 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:18.950 "assigned_rate_limits": {
00:11:18.950 "rw_ios_per_sec": 0,
00:11:18.950 "rw_mbytes_per_sec": 0,
00:11:18.950 "r_mbytes_per_sec": 0,
00:11:18.950 "w_mbytes_per_sec": 0
00:11:18.950 },
00:11:18.950 "claimed": false,
00:11:18.950 "zoned": false,
00:11:18.950 "supported_io_types": {
00:11:18.950 "read": true,
00:11:18.950 "write": true,
00:11:18.950 "unmap": true,
00:11:18.950 "flush": true,
00:11:18.950 "reset": true,
00:11:18.950 "nvme_admin": false,
00:11:18.950 "nvme_io": false,
00:11:18.950 "nvme_io_md": false,
00:11:18.950 "write_zeroes": true,
00:11:18.950 "zcopy": true,
00:11:18.950 "get_zone_info": false,
00:11:18.950 "zone_management": false,
00:11:18.950 "zone_append": false,
00:11:18.950 "compare": false,
00:11:18.950 "compare_and_write": false,
00:11:18.950 "abort": true,
00:11:18.950 "seek_hole": false,
00:11:18.950 "seek_data": false,
00:11:18.950 "copy": true,
00:11:18.950 "nvme_iov_md": false
00:11:18.950 },
00:11:18.950 "memory_domains": [
00:11:18.950 {
00:11:18.950 "dma_device_id": "system",
00:11:18.950 "dma_device_type": 1
00:11:18.950 },
00:11:18.950 {
00:11:18.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:18.950 "dma_device_type": 2
00:11:18.950 }
00:11:18.950 ],
00:11:18.950 "driver_specific": {}
00:11:18.950 }
00:11:18.950 ]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 [2024-11-05 11:27:18.172797] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:18.950 [2024-11-05 11:27:18.172904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:18.950 [2024-11-05 11:27:18.172934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:18.950 [2024-11-05 11:27:18.174825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.950 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.210 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.210 "name": "Existed_Raid",
00:11:19.210 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.210 "strip_size_kb": 0,
00:11:19.210 "state": "configuring",
00:11:19.210 "raid_level": "raid1",
00:11:19.210 "superblock": false,
00:11:19.210 "num_base_bdevs": 3,
00:11:19.210 "num_base_bdevs_discovered": 2,
00:11:19.210 "num_base_bdevs_operational": 3,
00:11:19.210 "base_bdevs_list": [
00:11:19.210 {
00:11:19.210 "name": "BaseBdev1",
00:11:19.210 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.210 "is_configured": false,
00:11:19.210 "data_offset": 0,
00:11:19.210 "data_size": 0
00:11:19.210 },
00:11:19.210 {
00:11:19.210 "name": "BaseBdev2",
00:11:19.210 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:19.210 "is_configured": true,
00:11:19.210 "data_offset": 0,
00:11:19.210 "data_size": 65536
00:11:19.210 },
00:11:19.210 {
00:11:19.210 "name": "BaseBdev3",
00:11:19.210 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:19.210 "is_configured": true,
00:11:19.210 "data_offset": 0,
00:11:19.210 "data_size": 65536
00:11:19.210 }
00:11:19.210 ]
00:11:19.210 }'
00:11:19.210 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.210 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.470 [2024-11-05 11:27:18.624054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.470 "name": "Existed_Raid",
00:11:19.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.470 "strip_size_kb": 0,
00:11:19.470 "state": "configuring",
00:11:19.470 "raid_level": "raid1",
00:11:19.470 "superblock": false,
00:11:19.470 "num_base_bdevs": 3,
00:11:19.470 "num_base_bdevs_discovered": 1,
00:11:19.470 "num_base_bdevs_operational": 3,
00:11:19.470 "base_bdevs_list": [
00:11:19.470 {
00:11:19.470 "name": "BaseBdev1",
00:11:19.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:19.470 "is_configured": false,
00:11:19.470 "data_offset": 0,
00:11:19.470 "data_size": 0
00:11:19.470 },
00:11:19.470 {
00:11:19.470 "name": null,
00:11:19.470 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:19.470 "is_configured": false,
00:11:19.470 "data_offset": 0,
00:11:19.470 "data_size": 65536
00:11:19.470 },
00:11:19.470 {
00:11:19.470 "name": "BaseBdev3",
00:11:19.470 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:19.470 "is_configured": true,
00:11:19.470 "data_offset": 0,
00:11:19.470 "data_size": 65536
00:11:19.470 }
00:11:19.470 ]
00:11:19.470 }'
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.470 11:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 [2024-11-05 11:27:19.121300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:20.039 BaseBdev1
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 [
00:11:20.039 {
00:11:20.039 "name": "BaseBdev1",
00:11:20.039 "aliases": [
00:11:20.039 "902b0408-f186-4433-97bb-1da995f5f886"
00:11:20.039 ],
00:11:20.039 "product_name": "Malloc disk",
00:11:20.039 "block_size": 512,
00:11:20.039 "num_blocks": 65536,
00:11:20.039 "uuid": "902b0408-f186-4433-97bb-1da995f5f886",
00:11:20.039 "assigned_rate_limits": {
00:11:20.039 "rw_ios_per_sec": 0,
00:11:20.039 "rw_mbytes_per_sec": 0,
00:11:20.039 "r_mbytes_per_sec": 0,
00:11:20.039 "w_mbytes_per_sec": 0
00:11:20.039 },
00:11:20.039 "claimed": true,
00:11:20.039 "claim_type": "exclusive_write",
00:11:20.039 "zoned": false,
00:11:20.039 "supported_io_types": {
00:11:20.039 "read": true,
00:11:20.039 "write": true,
00:11:20.039 "unmap": true,
00:11:20.039 "flush": true,
00:11:20.039 "reset": true,
00:11:20.039 "nvme_admin": false,
00:11:20.039 "nvme_io": false,
00:11:20.039 "nvme_io_md": false,
00:11:20.039 "write_zeroes": true,
00:11:20.039 "zcopy": true,
00:11:20.039 "get_zone_info": false,
00:11:20.039 "zone_management": false,
00:11:20.039 "zone_append": false,
00:11:20.039 "compare": false,
00:11:20.039 "compare_and_write": false,
00:11:20.039 "abort": true,
00:11:20.039 "seek_hole": false,
00:11:20.039 "seek_data": false,
00:11:20.039 "copy": true,
00:11:20.039 "nvme_iov_md": false
00:11:20.039 },
00:11:20.039 "memory_domains": [
00:11:20.039 {
00:11:20.039 "dma_device_id": "system",
00:11:20.039 "dma_device_type": 1
00:11:20.039 },
00:11:20.039 {
00:11:20.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.039 "dma_device_type": 2
00:11:20.039 }
00:11:20.039 ],
00:11:20.039 "driver_specific": {}
00:11:20.039 }
00:11:20.039 ]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.039 "name": "Existed_Raid",
00:11:20.039 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.039 "strip_size_kb": 0,
00:11:20.039 "state": "configuring",
00:11:20.039 "raid_level": "raid1",
00:11:20.039 "superblock": false,
00:11:20.039 "num_base_bdevs": 3,
00:11:20.039 "num_base_bdevs_discovered": 2,
00:11:20.039 "num_base_bdevs_operational": 3,
00:11:20.039 "base_bdevs_list": [
00:11:20.039 {
00:11:20.039 "name": "BaseBdev1",
00:11:20.039 "uuid": "902b0408-f186-4433-97bb-1da995f5f886",
00:11:20.039 "is_configured": true,
00:11:20.039 "data_offset": 0,
00:11:20.039 "data_size": 65536
00:11:20.039 },
00:11:20.039 {
00:11:20.039 "name": null,
00:11:20.039 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:20.039 "is_configured": false,
00:11:20.039 "data_offset": 0,
00:11:20.039 "data_size": 65536
00:11:20.039 },
00:11:20.039 {
00:11:20.039 "name": "BaseBdev3",
00:11:20.039 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:20.039 "is_configured": true,
00:11:20.039 "data_offset": 0,
00:11:20.039 "data_size": 65536
00:11:20.039 }
00:11:20.039 ]
00:11:20.039 }'
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.039 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.608 [2024-11-05 11:27:19.640539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.608 "name": "Existed_Raid",
00:11:20.608 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:20.608 "strip_size_kb": 0,
00:11:20.608 "state": "configuring",
00:11:20.608 "raid_level": "raid1",
00:11:20.608 "superblock": false,
00:11:20.608 "num_base_bdevs": 3,
00:11:20.608 "num_base_bdevs_discovered": 1,
00:11:20.608 "num_base_bdevs_operational": 3,
00:11:20.608 "base_bdevs_list": [
00:11:20.608 {
00:11:20.608 "name": "BaseBdev1",
00:11:20.608 "uuid": "902b0408-f186-4433-97bb-1da995f5f886",
00:11:20.608 "is_configured": true,
00:11:20.608 "data_offset": 0,
00:11:20.608 "data_size": 65536
00:11:20.608 },
00:11:20.608 {
00:11:20.608 "name": null,
00:11:20.608 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:20.608 "is_configured": false,
00:11:20.608 "data_offset": 0,
00:11:20.608 "data_size": 65536
00:11:20.608 },
00:11:20.608 {
00:11:20.608 "name": null,
00:11:20.608 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:20.608 "is_configured": false,
00:11:20.608 "data_offset": 0,
00:11:20.608 "data_size": 65536
00:11:20.608 }
00:11:20.608 ]
00:11:20.608 }'
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.608 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.867 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.133 [2024-11-05 11:27:20.147730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.133 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.134 "name": "Existed_Raid",
00:11:21.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.134 "strip_size_kb": 0,
00:11:21.134 "state": "configuring",
00:11:21.134 "raid_level": "raid1",
00:11:21.134 "superblock": false,
00:11:21.134 "num_base_bdevs": 3,
00:11:21.134 "num_base_bdevs_discovered": 2,
00:11:21.134 "num_base_bdevs_operational": 3,
00:11:21.134 "base_bdevs_list": [
00:11:21.134 {
00:11:21.134 "name": "BaseBdev1",
00:11:21.134 "uuid": "902b0408-f186-4433-97bb-1da995f5f886",
00:11:21.134 "is_configured": true,
00:11:21.134 "data_offset": 0,
00:11:21.134 "data_size": 65536
00:11:21.134 },
00:11:21.134 {
00:11:21.134 "name": null,
00:11:21.134 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:21.134 "is_configured": false,
00:11:21.134 "data_offset": 0,
00:11:21.134 "data_size": 65536
00:11:21.134 },
00:11:21.134 {
00:11:21.134 "name": "BaseBdev3",
00:11:21.134 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:21.134 "is_configured": true,
00:11:21.134 "data_offset": 0,
00:11:21.134 "data_size": 65536
00:11:21.134 }
00:11:21.134 ]
00:11:21.134 }'
00:11:21.134 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.134 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.402 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.402 [2024-11-05 11:27:20.643199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:21.662 "name": "Existed_Raid",
00:11:21.662 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:21.662 "strip_size_kb": 0,
00:11:21.662 "state": "configuring",
00:11:21.662 "raid_level": "raid1",
00:11:21.662 "superblock": false,
00:11:21.662 "num_base_bdevs": 3,
00:11:21.662 "num_base_bdevs_discovered": 1,
00:11:21.662 "num_base_bdevs_operational": 3,
00:11:21.662 "base_bdevs_list": [
00:11:21.662 {
00:11:21.662 "name": null,
00:11:21.662 "uuid": "902b0408-f186-4433-97bb-1da995f5f886",
00:11:21.662 "is_configured": false,
00:11:21.662 "data_offset": 0,
00:11:21.662 "data_size": 65536
00:11:21.662 },
00:11:21.662 {
00:11:21.662 "name": null,
00:11:21.662 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02",
00:11:21.662 "is_configured": false,
00:11:21.662 "data_offset": 0,
00:11:21.662 "data_size": 65536
00:11:21.662 },
00:11:21.662 {
00:11:21.662 "name": "BaseBdev3",
00:11:21.662 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4",
00:11:21.662 "is_configured": true,
00:11:21.662 "data_offset": 0,
00:11:21.662 "data_size": 65536
00:11:21.662 }
00:11:21.662 ]
00:11:21.662 }'
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:21.662 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.231 [2024-11-05 11:27:21.278760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- #
local num_base_bdevs_operational=3 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.231 "name": "Existed_Raid", 00:11:22.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.231 "strip_size_kb": 0, 00:11:22.231 "state": "configuring", 00:11:22.231 "raid_level": "raid1", 00:11:22.231 "superblock": false, 00:11:22.231 "num_base_bdevs": 3, 00:11:22.231 "num_base_bdevs_discovered": 2, 00:11:22.231 "num_base_bdevs_operational": 3, 00:11:22.231 "base_bdevs_list": [ 00:11:22.231 { 00:11:22.231 "name": null, 00:11:22.231 "uuid": "902b0408-f186-4433-97bb-1da995f5f886", 00:11:22.231 "is_configured": false, 00:11:22.231 "data_offset": 0, 00:11:22.231 "data_size": 65536 00:11:22.231 }, 00:11:22.231 { 00:11:22.231 "name": "BaseBdev2", 00:11:22.231 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02", 00:11:22.231 "is_configured": true, 00:11:22.231 "data_offset": 0, 00:11:22.231 "data_size": 65536 00:11:22.231 }, 00:11:22.231 { 
00:11:22.231 "name": "BaseBdev3", 00:11:22.231 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4", 00:11:22.231 "is_configured": true, 00:11:22.231 "data_offset": 0, 00:11:22.231 "data_size": 65536 00:11:22.231 } 00:11:22.231 ] 00:11:22.231 }' 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.231 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 902b0408-f186-4433-97bb-1da995f5f886 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.800 11:27:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.800 [2024-11-05 11:27:21.914598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.800 [2024-11-05 11:27:21.914760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.800 [2024-11-05 11:27:21.914790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:22.800 [2024-11-05 11:27:21.915126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:22.800 [2024-11-05 11:27:21.915415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.800 [2024-11-05 11:27:21.915473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.800 [2024-11-05 11:27:21.915845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.800 NewBaseBdev 00:11:22.800 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.801 [ 00:11:22.801 { 00:11:22.801 "name": "NewBaseBdev", 00:11:22.801 "aliases": [ 00:11:22.801 "902b0408-f186-4433-97bb-1da995f5f886" 00:11:22.801 ], 00:11:22.801 "product_name": "Malloc disk", 00:11:22.801 "block_size": 512, 00:11:22.801 "num_blocks": 65536, 00:11:22.801 "uuid": "902b0408-f186-4433-97bb-1da995f5f886", 00:11:22.801 "assigned_rate_limits": { 00:11:22.801 "rw_ios_per_sec": 0, 00:11:22.801 "rw_mbytes_per_sec": 0, 00:11:22.801 "r_mbytes_per_sec": 0, 00:11:22.801 "w_mbytes_per_sec": 0 00:11:22.801 }, 00:11:22.801 "claimed": true, 00:11:22.801 "claim_type": "exclusive_write", 00:11:22.801 "zoned": false, 00:11:22.801 "supported_io_types": { 00:11:22.801 "read": true, 00:11:22.801 "write": true, 00:11:22.801 "unmap": true, 00:11:22.801 "flush": true, 00:11:22.801 "reset": true, 00:11:22.801 "nvme_admin": false, 00:11:22.801 "nvme_io": false, 00:11:22.801 "nvme_io_md": false, 00:11:22.801 "write_zeroes": true, 00:11:22.801 "zcopy": true, 00:11:22.801 "get_zone_info": false, 00:11:22.801 "zone_management": false, 00:11:22.801 "zone_append": false, 00:11:22.801 "compare": false, 00:11:22.801 "compare_and_write": false, 00:11:22.801 "abort": true, 00:11:22.801 "seek_hole": false, 00:11:22.801 "seek_data": false, 00:11:22.801 "copy": true, 00:11:22.801 "nvme_iov_md": false 00:11:22.801 }, 00:11:22.801 "memory_domains": [ 00:11:22.801 { 00:11:22.801 
"dma_device_id": "system", 00:11:22.801 "dma_device_type": 1 00:11:22.801 }, 00:11:22.801 { 00:11:22.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.801 "dma_device_type": 2 00:11:22.801 } 00:11:22.801 ], 00:11:22.801 "driver_specific": {} 00:11:22.801 } 00:11:22.801 ] 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.801 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.801 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.801 "name": "Existed_Raid", 00:11:22.801 "uuid": "83c1252a-d49d-4dc9-a3df-1dd99dd0ed0a", 00:11:22.801 "strip_size_kb": 0, 00:11:22.801 "state": "online", 00:11:22.801 "raid_level": "raid1", 00:11:22.801 "superblock": false, 00:11:22.801 "num_base_bdevs": 3, 00:11:22.801 "num_base_bdevs_discovered": 3, 00:11:22.801 "num_base_bdevs_operational": 3, 00:11:22.801 "base_bdevs_list": [ 00:11:22.801 { 00:11:22.801 "name": "NewBaseBdev", 00:11:22.801 "uuid": "902b0408-f186-4433-97bb-1da995f5f886", 00:11:22.801 "is_configured": true, 00:11:22.801 "data_offset": 0, 00:11:22.801 "data_size": 65536 00:11:22.801 }, 00:11:22.801 { 00:11:22.801 "name": "BaseBdev2", 00:11:22.801 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02", 00:11:22.801 "is_configured": true, 00:11:22.801 "data_offset": 0, 00:11:22.801 "data_size": 65536 00:11:22.801 }, 00:11:22.801 { 00:11:22.801 "name": "BaseBdev3", 00:11:22.801 "uuid": "25adc573-0f75-4d8c-84cd-145e64e4cfe4", 00:11:22.801 "is_configured": true, 00:11:22.801 "data_offset": 0, 00:11:22.801 "data_size": 65536 00:11:22.801 } 00:11:22.801 ] 00:11:22.801 }' 00:11:22.801 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.801 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.371 11:27:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.371 [2024-11-05 11:27:22.446150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.371 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.371 "name": "Existed_Raid", 00:11:23.371 "aliases": [ 00:11:23.371 "83c1252a-d49d-4dc9-a3df-1dd99dd0ed0a" 00:11:23.371 ], 00:11:23.371 "product_name": "Raid Volume", 00:11:23.371 "block_size": 512, 00:11:23.371 "num_blocks": 65536, 00:11:23.371 "uuid": "83c1252a-d49d-4dc9-a3df-1dd99dd0ed0a", 00:11:23.371 "assigned_rate_limits": { 00:11:23.371 "rw_ios_per_sec": 0, 00:11:23.371 "rw_mbytes_per_sec": 0, 00:11:23.371 "r_mbytes_per_sec": 0, 00:11:23.371 "w_mbytes_per_sec": 0 00:11:23.371 }, 00:11:23.371 "claimed": false, 00:11:23.371 "zoned": false, 00:11:23.371 "supported_io_types": { 00:11:23.371 "read": true, 00:11:23.371 "write": true, 00:11:23.371 "unmap": false, 00:11:23.371 "flush": false, 00:11:23.371 "reset": true, 00:11:23.371 "nvme_admin": false, 00:11:23.371 "nvme_io": false, 00:11:23.371 "nvme_io_md": false, 00:11:23.371 "write_zeroes": true, 00:11:23.371 "zcopy": false, 00:11:23.371 
"get_zone_info": false, 00:11:23.371 "zone_management": false, 00:11:23.371 "zone_append": false, 00:11:23.371 "compare": false, 00:11:23.371 "compare_and_write": false, 00:11:23.371 "abort": false, 00:11:23.371 "seek_hole": false, 00:11:23.371 "seek_data": false, 00:11:23.371 "copy": false, 00:11:23.371 "nvme_iov_md": false 00:11:23.371 }, 00:11:23.371 "memory_domains": [ 00:11:23.371 { 00:11:23.371 "dma_device_id": "system", 00:11:23.371 "dma_device_type": 1 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.371 "dma_device_type": 2 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "system", 00:11:23.371 "dma_device_type": 1 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.371 "dma_device_type": 2 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "system", 00:11:23.371 "dma_device_type": 1 00:11:23.371 }, 00:11:23.371 { 00:11:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.371 "dma_device_type": 2 00:11:23.371 } 00:11:23.371 ], 00:11:23.371 "driver_specific": { 00:11:23.371 "raid": { 00:11:23.371 "uuid": "83c1252a-d49d-4dc9-a3df-1dd99dd0ed0a", 00:11:23.371 "strip_size_kb": 0, 00:11:23.371 "state": "online", 00:11:23.371 "raid_level": "raid1", 00:11:23.371 "superblock": false, 00:11:23.371 "num_base_bdevs": 3, 00:11:23.371 "num_base_bdevs_discovered": 3, 00:11:23.371 "num_base_bdevs_operational": 3, 00:11:23.371 "base_bdevs_list": [ 00:11:23.371 { 00:11:23.371 "name": "NewBaseBdev", 00:11:23.371 "uuid": "902b0408-f186-4433-97bb-1da995f5f886", 00:11:23.371 "is_configured": true, 00:11:23.371 "data_offset": 0, 00:11:23.372 "data_size": 65536 00:11:23.372 }, 00:11:23.372 { 00:11:23.372 "name": "BaseBdev2", 00:11:23.372 "uuid": "d095a462-e938-485c-a2bb-031d634c9a02", 00:11:23.372 "is_configured": true, 00:11:23.372 "data_offset": 0, 00:11:23.372 "data_size": 65536 00:11:23.372 }, 00:11:23.372 { 00:11:23.372 "name": "BaseBdev3", 00:11:23.372 "uuid": 
"25adc573-0f75-4d8c-84cd-145e64e4cfe4", 00:11:23.372 "is_configured": true, 00:11:23.372 "data_offset": 0, 00:11:23.372 "data_size": 65536 00:11:23.372 } 00:11:23.372 ] 00:11:23.372 } 00:11:23.372 } 00:11:23.372 }' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.372 BaseBdev2 00:11:23.372 BaseBdev3' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.372 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.632 
[2024-11-05 11:27:22.737286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.632 [2024-11-05 11:27:22.737322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.632 [2024-11-05 11:27:22.737416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.632 [2024-11-05 11:27:22.737755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.632 [2024-11-05 11:27:22.737769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67523 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67523 ']' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67523 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67523 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67523' 00:11:23.632 killing process with pid 67523 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67523 00:11:23.632 [2024-11-05 
11:27:22.786807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.632 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67523 00:11:23.892 [2024-11-05 11:27:23.135380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.273 ************************************ 00:11:25.273 END TEST raid_state_function_test 00:11:25.273 ************************************ 00:11:25.273 00:11:25.273 real 0m11.285s 00:11:25.273 user 0m17.916s 00:11:25.273 sys 0m1.870s 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 11:27:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:25.273 11:27:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:25.273 11:27:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.273 11:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 ************************************ 00:11:25.273 START TEST raid_state_function_test_sb 00:11:25.273 ************************************ 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.273 11:27:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:25.273 
11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68155 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68155' 00:11:25.273 Process raid pid: 68155 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68155 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68155 ']' 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:25.273 11:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.533 [2024-11-05 11:27:24.557138] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:11:25.533 [2024-11-05 11:27:24.557277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.533 [2024-11-05 11:27:24.738197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.793 [2024-11-05 11:27:24.867575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.052 [2024-11-05 11:27:25.097550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.052 [2024-11-05 11:27:25.097601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.310 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.310 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:26.310 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.310 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.310 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.310 [2024-11-05 11:27:25.444465] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.311 [2024-11-05 11:27:25.444600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.311 [2024-11-05 11:27:25.444616] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.311 [2024-11-05 11:27:25.444628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.311 [2024-11-05 11:27:25.444636] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:26.311 [2024-11-05 11:27:25.444645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.311 "name": "Existed_Raid", 00:11:26.311 "uuid": "b2c42da4-800a-4457-9dbc-2e3ead1c1cba", 00:11:26.311 "strip_size_kb": 0, 00:11:26.311 "state": "configuring", 00:11:26.311 "raid_level": "raid1", 00:11:26.311 "superblock": true, 00:11:26.311 "num_base_bdevs": 3, 00:11:26.311 "num_base_bdevs_discovered": 0, 00:11:26.311 "num_base_bdevs_operational": 3, 00:11:26.311 "base_bdevs_list": [ 00:11:26.311 { 00:11:26.311 "name": "BaseBdev1", 00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.311 "is_configured": false, 00:11:26.311 "data_offset": 0, 00:11:26.311 "data_size": 0 00:11:26.311 }, 00:11:26.311 { 00:11:26.311 "name": "BaseBdev2", 00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.311 "is_configured": false, 00:11:26.311 "data_offset": 0, 00:11:26.311 "data_size": 0 00:11:26.311 }, 00:11:26.311 { 00:11:26.311 "name": "BaseBdev3", 00:11:26.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.311 "is_configured": false, 00:11:26.311 "data_offset": 0, 00:11:26.311 "data_size": 0 00:11:26.311 } 00:11:26.311 ] 00:11:26.311 }' 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.311 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 [2024-11-05 11:27:25.903662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.883 [2024-11-05 11:27:25.903775] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 [2024-11-05 11:27:25.915612] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.883 [2024-11-05 11:27:25.915712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.883 [2024-11-05 11:27:25.915749] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.883 [2024-11-05 11:27:25.915777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.883 [2024-11-05 11:27:25.915849] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.883 [2024-11-05 11:27:25.915877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 [2024-11-05 11:27:25.966554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.883 BaseBdev1 
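The `verify_raid_bdev_state` calls above fetch `bdev_raid_get_bdevs all` and filter the result with `jq -r '.[] | select(.name == "Existed_Raid")'`. The same check can be sketched in Python; this is a minimal sketch based on the field names visible in this log's JSON dumps, not the test's actual implementation:

```python
import json

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # "Discovered" base bdevs are the configured entries in base_bdevs_list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

# Example against the first dump in this log: a raid1 volume still in the
# "configuring" state, with none of its three base bdevs discovered yet.
bdevs = json.loads("""[{
  "name": "Existed_Raid", "strip_size_kb": 0, "state": "configuring",
  "raid_level": "raid1", "superblock": true, "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0, "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false}
  ]}]""")
info = verify_raid_bdev_state(bdevs, "Existed_Raid", "configuring", "raid1", 0, 3)
```

As each `bdev_malloc_create 32 512 -b BaseBdevN` call lands, `num_base_bdevs_discovered` ticks up while the state stays `configuring` until all three base bdevs are claimed.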
00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.883 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.884 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.884 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.884 [ 00:11:26.884 { 00:11:26.884 "name": "BaseBdev1", 00:11:26.884 "aliases": [ 00:11:26.884 "3767528f-6741-4756-8b65-1d571fc237b1" 00:11:26.884 ], 00:11:26.884 "product_name": "Malloc disk", 00:11:26.884 "block_size": 512, 00:11:26.884 "num_blocks": 65536, 00:11:26.884 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:26.884 "assigned_rate_limits": { 00:11:26.884 
"rw_ios_per_sec": 0, 00:11:26.884 "rw_mbytes_per_sec": 0, 00:11:26.884 "r_mbytes_per_sec": 0, 00:11:26.884 "w_mbytes_per_sec": 0 00:11:26.884 }, 00:11:26.884 "claimed": true, 00:11:26.884 "claim_type": "exclusive_write", 00:11:26.884 "zoned": false, 00:11:26.884 "supported_io_types": { 00:11:26.884 "read": true, 00:11:26.884 "write": true, 00:11:26.884 "unmap": true, 00:11:26.884 "flush": true, 00:11:26.884 "reset": true, 00:11:26.884 "nvme_admin": false, 00:11:26.884 "nvme_io": false, 00:11:26.884 "nvme_io_md": false, 00:11:26.884 "write_zeroes": true, 00:11:26.884 "zcopy": true, 00:11:26.884 "get_zone_info": false, 00:11:26.884 "zone_management": false, 00:11:26.884 "zone_append": false, 00:11:26.884 "compare": false, 00:11:26.884 "compare_and_write": false, 00:11:26.884 "abort": true, 00:11:26.884 "seek_hole": false, 00:11:26.884 "seek_data": false, 00:11:26.884 "copy": true, 00:11:26.884 "nvme_iov_md": false 00:11:26.884 }, 00:11:26.884 "memory_domains": [ 00:11:26.884 { 00:11:26.884 "dma_device_id": "system", 00:11:26.884 "dma_device_type": 1 00:11:26.884 }, 00:11:26.884 { 00:11:26.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.884 "dma_device_type": 2 00:11:26.884 } 00:11:26.884 ], 00:11:26.884 "driver_specific": {} 00:11:26.884 } 00:11:26.884 ] 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.884 "name": "Existed_Raid", 00:11:26.884 "uuid": "814f1576-bcf6-48f2-95dd-8dce50ba4ae5", 00:11:26.884 "strip_size_kb": 0, 00:11:26.884 "state": "configuring", 00:11:26.884 "raid_level": "raid1", 00:11:26.884 "superblock": true, 00:11:26.884 "num_base_bdevs": 3, 00:11:26.884 "num_base_bdevs_discovered": 1, 00:11:26.884 "num_base_bdevs_operational": 3, 00:11:26.884 "base_bdevs_list": [ 00:11:26.884 { 00:11:26.884 "name": "BaseBdev1", 00:11:26.884 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:26.884 "is_configured": true, 00:11:26.884 "data_offset": 2048, 00:11:26.884 "data_size": 63488 
00:11:26.884 }, 00:11:26.884 { 00:11:26.884 "name": "BaseBdev2", 00:11:26.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.884 "is_configured": false, 00:11:26.884 "data_offset": 0, 00:11:26.884 "data_size": 0 00:11:26.884 }, 00:11:26.884 { 00:11:26.884 "name": "BaseBdev3", 00:11:26.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.884 "is_configured": false, 00:11:26.884 "data_offset": 0, 00:11:26.884 "data_size": 0 00:11:26.884 } 00:11:26.884 ] 00:11:26.884 }' 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.884 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.452 [2024-11-05 11:27:26.477773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.452 [2024-11-05 11:27:26.477834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.452 [2024-11-05 11:27:26.485814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.452 [2024-11-05 11:27:26.487896] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.452 [2024-11-05 11:27:26.487999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.452 [2024-11-05 11:27:26.488017] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.452 [2024-11-05 11:27:26.488029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.452 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.453 "name": "Existed_Raid", 00:11:27.453 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:27.453 "strip_size_kb": 0, 00:11:27.453 "state": "configuring", 00:11:27.453 "raid_level": "raid1", 00:11:27.453 "superblock": true, 00:11:27.453 "num_base_bdevs": 3, 00:11:27.453 "num_base_bdevs_discovered": 1, 00:11:27.453 "num_base_bdevs_operational": 3, 00:11:27.453 "base_bdevs_list": [ 00:11:27.453 { 00:11:27.453 "name": "BaseBdev1", 00:11:27.453 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:27.453 "is_configured": true, 00:11:27.453 "data_offset": 2048, 00:11:27.453 "data_size": 63488 00:11:27.453 }, 00:11:27.453 { 00:11:27.453 "name": "BaseBdev2", 00:11:27.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.453 "is_configured": false, 00:11:27.453 "data_offset": 0, 00:11:27.453 "data_size": 0 00:11:27.453 }, 00:11:27.453 { 00:11:27.453 "name": "BaseBdev3", 00:11:27.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.453 "is_configured": false, 00:11:27.453 "data_offset": 0, 00:11:27.453 "data_size": 0 00:11:27.453 } 00:11:27.453 ] 00:11:27.453 }' 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.453 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 [2024-11-05 11:27:26.954663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.713 BaseBdev2 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:27.713 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 [ 00:11:27.713 { 00:11:27.713 "name": "BaseBdev2", 00:11:27.713 "aliases": [ 00:11:27.713 "5129c141-df27-42ff-b62b-caffaf3f6fe0" 00:11:27.713 ], 00:11:27.713 "product_name": "Malloc disk", 00:11:27.713 "block_size": 512, 00:11:27.713 "num_blocks": 65536, 00:11:27.713 "uuid": "5129c141-df27-42ff-b62b-caffaf3f6fe0", 00:11:27.713 "assigned_rate_limits": { 00:11:27.713 "rw_ios_per_sec": 0, 00:11:27.713 "rw_mbytes_per_sec": 0, 00:11:27.713 "r_mbytes_per_sec": 0, 00:11:27.713 "w_mbytes_per_sec": 0 00:11:27.713 }, 00:11:27.713 "claimed": true, 00:11:27.713 "claim_type": "exclusive_write", 00:11:27.713 "zoned": false, 00:11:27.713 "supported_io_types": { 00:11:27.713 "read": true, 00:11:27.713 "write": true, 00:11:27.713 "unmap": true, 00:11:27.713 "flush": true, 00:11:27.713 "reset": true, 00:11:27.713 "nvme_admin": false, 00:11:27.713 "nvme_io": false, 00:11:27.713 "nvme_io_md": false, 00:11:27.713 "write_zeroes": true, 00:11:27.713 "zcopy": true, 00:11:27.713 "get_zone_info": false, 00:11:27.713 "zone_management": false, 00:11:27.972 "zone_append": false, 00:11:27.972 "compare": false, 00:11:27.972 "compare_and_write": false, 00:11:27.972 "abort": true, 00:11:27.972 "seek_hole": false, 00:11:27.972 "seek_data": false, 00:11:27.972 "copy": true, 00:11:27.972 "nvme_iov_md": false 00:11:27.972 }, 00:11:27.972 "memory_domains": [ 00:11:27.972 { 00:11:27.972 "dma_device_id": "system", 00:11:27.972 "dma_device_type": 1 00:11:27.972 }, 00:11:27.972 { 00:11:27.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.972 "dma_device_type": 2 00:11:27.972 } 00:11:27.972 ], 00:11:27.972 "driver_specific": {} 00:11:27.972 } 00:11:27.972 ] 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
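The `waitforbdev` helper used after each `bdev_malloc_create` boils down to polling `bdev_get_bdevs -b NAME -t 2000` until the bdev is visible or the timeout expires. A rough Python sketch of that loop, where `lookup` is a hypothetical stand-in for the RPC call (not part of the real script):

```python
import time

def wait_for_bdev(lookup, name, timeout_s=2.0, poll_s=0.1):
    # Poll until lookup(name) returns a bdev descriptor, mirroring the
    # 2000 ms timeout waitforbdev passes via `-t 2000`.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = lookup(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")

# Example: a fake lookup whose bdev "appears" on the second poll.
calls = {"n": 0}
def fake_lookup(name):
    calls["n"] += 1
    return {"name": name, "claimed": True} if calls["n"] >= 2 else None

bdev = wait_for_bdev(fake_lookup, "BaseBdev2")
```

In the real test the returned descriptor is the Malloc disk dump shown above, with `"claimed": true` and `"claim_type": "exclusive_write"` once the raid bdev has claimed it.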
00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.972 11:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.972 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.972 
11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.972 "name": "Existed_Raid", 00:11:27.972 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:27.972 "strip_size_kb": 0, 00:11:27.972 "state": "configuring", 00:11:27.972 "raid_level": "raid1", 00:11:27.972 "superblock": true, 00:11:27.972 "num_base_bdevs": 3, 00:11:27.972 "num_base_bdevs_discovered": 2, 00:11:27.972 "num_base_bdevs_operational": 3, 00:11:27.972 "base_bdevs_list": [ 00:11:27.972 { 00:11:27.972 "name": "BaseBdev1", 00:11:27.972 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:27.972 "is_configured": true, 00:11:27.972 "data_offset": 2048, 00:11:27.972 "data_size": 63488 00:11:27.972 }, 00:11:27.972 { 00:11:27.972 "name": "BaseBdev2", 00:11:27.972 "uuid": "5129c141-df27-42ff-b62b-caffaf3f6fe0", 00:11:27.972 "is_configured": true, 00:11:27.972 "data_offset": 2048, 00:11:27.972 "data_size": 63488 00:11:27.972 }, 00:11:27.972 { 00:11:27.972 "name": "BaseBdev3", 00:11:27.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.972 "is_configured": false, 00:11:27.972 "data_offset": 0, 00:11:27.972 "data_size": 0 00:11:27.972 } 00:11:27.972 ] 00:11:27.972 }' 00:11:27.972 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.972 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.231 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.231 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.231 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.490 [2024-11-05 11:27:27.541221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.490 [2024-11-05 11:27:27.541570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:28.490 [2024-11-05 11:27:27.541636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.490 [2024-11-05 11:27:27.541968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.490 BaseBdev3 00:11:28.490 [2024-11-05 11:27:27.542200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.490 [2024-11-05 11:27:27.542213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:28.490 [2024-11-05 11:27:27.542371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.490 11:27:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.490 [ 00:11:28.490 { 00:11:28.490 "name": "BaseBdev3", 00:11:28.490 "aliases": [ 00:11:28.490 "14abdeae-9cd5-4cdd-9983-021fec4c9ca9" 00:11:28.490 ], 00:11:28.490 "product_name": "Malloc disk", 00:11:28.490 "block_size": 512, 00:11:28.490 "num_blocks": 65536, 00:11:28.490 "uuid": "14abdeae-9cd5-4cdd-9983-021fec4c9ca9", 00:11:28.490 "assigned_rate_limits": { 00:11:28.490 "rw_ios_per_sec": 0, 00:11:28.490 "rw_mbytes_per_sec": 0, 00:11:28.490 "r_mbytes_per_sec": 0, 00:11:28.490 "w_mbytes_per_sec": 0 00:11:28.490 }, 00:11:28.490 "claimed": true, 00:11:28.490 "claim_type": "exclusive_write", 00:11:28.490 "zoned": false, 00:11:28.490 "supported_io_types": { 00:11:28.490 "read": true, 00:11:28.490 "write": true, 00:11:28.490 "unmap": true, 00:11:28.490 "flush": true, 00:11:28.490 "reset": true, 00:11:28.490 "nvme_admin": false, 00:11:28.490 "nvme_io": false, 00:11:28.490 "nvme_io_md": false, 00:11:28.490 "write_zeroes": true, 00:11:28.490 "zcopy": true, 00:11:28.490 "get_zone_info": false, 00:11:28.490 "zone_management": false, 00:11:28.490 "zone_append": false, 00:11:28.490 "compare": false, 00:11:28.490 "compare_and_write": false, 00:11:28.490 "abort": true, 00:11:28.490 "seek_hole": false, 00:11:28.490 "seek_data": false, 00:11:28.490 "copy": true, 00:11:28.490 "nvme_iov_md": false 00:11:28.490 }, 00:11:28.490 "memory_domains": [ 00:11:28.490 { 00:11:28.490 "dma_device_id": "system", 00:11:28.490 "dma_device_type": 1 00:11:28.490 }, 00:11:28.490 { 00:11:28.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.490 "dma_device_type": 2 00:11:28.490 } 00:11:28.490 ], 00:11:28.490 "driver_specific": {} 00:11:28.490 } 00:11:28.490 ] 
00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.490 
11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.490 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.490 "name": "Existed_Raid", 00:11:28.490 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:28.490 "strip_size_kb": 0, 00:11:28.490 "state": "online", 00:11:28.490 "raid_level": "raid1", 00:11:28.491 "superblock": true, 00:11:28.491 "num_base_bdevs": 3, 00:11:28.491 "num_base_bdevs_discovered": 3, 00:11:28.491 "num_base_bdevs_operational": 3, 00:11:28.491 "base_bdevs_list": [ 00:11:28.491 { 00:11:28.491 "name": "BaseBdev1", 00:11:28.491 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:28.491 "is_configured": true, 00:11:28.491 "data_offset": 2048, 00:11:28.491 "data_size": 63488 00:11:28.491 }, 00:11:28.491 { 00:11:28.491 "name": "BaseBdev2", 00:11:28.491 "uuid": "5129c141-df27-42ff-b62b-caffaf3f6fe0", 00:11:28.491 "is_configured": true, 00:11:28.491 "data_offset": 2048, 00:11:28.491 "data_size": 63488 00:11:28.491 }, 00:11:28.491 { 00:11:28.491 "name": "BaseBdev3", 00:11:28.491 "uuid": "14abdeae-9cd5-4cdd-9983-021fec4c9ca9", 00:11:28.491 "is_configured": true, 00:11:28.491 "data_offset": 2048, 00:11:28.491 "data_size": 63488 00:11:28.491 } 00:11:28.491 ] 00:11:28.491 }' 00:11:28.491 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.491 11:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 [2024-11-05 11:27:28.056750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.057 "name": "Existed_Raid", 00:11:29.057 "aliases": [ 00:11:29.057 "2599d898-a52a-4734-b9a7-22721eda81a5" 00:11:29.057 ], 00:11:29.057 "product_name": "Raid Volume", 00:11:29.057 "block_size": 512, 00:11:29.057 "num_blocks": 63488, 00:11:29.057 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:29.057 "assigned_rate_limits": { 00:11:29.057 "rw_ios_per_sec": 0, 00:11:29.057 "rw_mbytes_per_sec": 0, 00:11:29.057 "r_mbytes_per_sec": 0, 00:11:29.057 "w_mbytes_per_sec": 0 00:11:29.057 }, 00:11:29.057 "claimed": false, 00:11:29.057 "zoned": false, 00:11:29.057 "supported_io_types": { 00:11:29.057 "read": true, 00:11:29.057 "write": true, 00:11:29.057 "unmap": false, 00:11:29.057 "flush": false, 00:11:29.057 "reset": true, 00:11:29.057 "nvme_admin": false, 00:11:29.057 "nvme_io": false, 00:11:29.057 "nvme_io_md": false, 00:11:29.057 "write_zeroes": true, 
00:11:29.057 "zcopy": false, 00:11:29.057 "get_zone_info": false, 00:11:29.057 "zone_management": false, 00:11:29.057 "zone_append": false, 00:11:29.057 "compare": false, 00:11:29.057 "compare_and_write": false, 00:11:29.057 "abort": false, 00:11:29.057 "seek_hole": false, 00:11:29.057 "seek_data": false, 00:11:29.057 "copy": false, 00:11:29.057 "nvme_iov_md": false 00:11:29.057 }, 00:11:29.057 "memory_domains": [ 00:11:29.057 { 00:11:29.057 "dma_device_id": "system", 00:11:29.057 "dma_device_type": 1 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.057 "dma_device_type": 2 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "system", 00:11:29.057 "dma_device_type": 1 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.057 "dma_device_type": 2 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "system", 00:11:29.057 "dma_device_type": 1 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.057 "dma_device_type": 2 00:11:29.057 } 00:11:29.057 ], 00:11:29.057 "driver_specific": { 00:11:29.057 "raid": { 00:11:29.057 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:29.057 "strip_size_kb": 0, 00:11:29.057 "state": "online", 00:11:29.057 "raid_level": "raid1", 00:11:29.057 "superblock": true, 00:11:29.057 "num_base_bdevs": 3, 00:11:29.057 "num_base_bdevs_discovered": 3, 00:11:29.057 "num_base_bdevs_operational": 3, 00:11:29.057 "base_bdevs_list": [ 00:11:29.057 { 00:11:29.057 "name": "BaseBdev1", 00:11:29.057 "uuid": "3767528f-6741-4756-8b65-1d571fc237b1", 00:11:29.057 "is_configured": true, 00:11:29.057 "data_offset": 2048, 00:11:29.057 "data_size": 63488 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "name": "BaseBdev2", 00:11:29.057 "uuid": "5129c141-df27-42ff-b62b-caffaf3f6fe0", 00:11:29.057 "is_configured": true, 00:11:29.057 "data_offset": 2048, 00:11:29.057 "data_size": 63488 00:11:29.057 }, 00:11:29.057 { 
00:11:29.057 "name": "BaseBdev3", 00:11:29.057 "uuid": "14abdeae-9cd5-4cdd-9983-021fec4c9ca9", 00:11:29.057 "is_configured": true, 00:11:29.057 "data_offset": 2048, 00:11:29.057 "data_size": 63488 00:11:29.057 } 00:11:29.057 ] 00:11:29.057 } 00:11:29.057 } 00:11:29.057 }' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.057 BaseBdev2 00:11:29.057 BaseBdev3' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.057 11:27:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.057 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.317 [2024-11-05 11:27:28.351999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.317 
11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.317 "name": "Existed_Raid", 00:11:29.317 "uuid": "2599d898-a52a-4734-b9a7-22721eda81a5", 00:11:29.317 "strip_size_kb": 0, 00:11:29.317 "state": "online", 00:11:29.317 "raid_level": "raid1", 00:11:29.317 "superblock": true, 00:11:29.317 "num_base_bdevs": 3, 00:11:29.317 "num_base_bdevs_discovered": 2, 00:11:29.317 "num_base_bdevs_operational": 2, 00:11:29.317 "base_bdevs_list": [ 00:11:29.317 { 00:11:29.317 "name": null, 00:11:29.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.317 "is_configured": false, 00:11:29.317 "data_offset": 0, 00:11:29.317 "data_size": 63488 00:11:29.317 }, 00:11:29.317 { 00:11:29.317 "name": "BaseBdev2", 00:11:29.317 "uuid": "5129c141-df27-42ff-b62b-caffaf3f6fe0", 00:11:29.317 "is_configured": true, 00:11:29.317 "data_offset": 2048, 00:11:29.317 "data_size": 63488 00:11:29.317 }, 00:11:29.317 { 00:11:29.317 "name": "BaseBdev3", 00:11:29.317 "uuid": "14abdeae-9cd5-4cdd-9983-021fec4c9ca9", 00:11:29.317 "is_configured": true, 00:11:29.317 "data_offset": 2048, 00:11:29.317 "data_size": 63488 00:11:29.317 } 00:11:29.317 ] 00:11:29.317 }' 00:11:29.317 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.317 
11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.886 11:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 [2024-11-05 11:27:28.960609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.886 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.886 [2024-11-05 11:27:29.121507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.886 [2024-11-05 11:27:29.121686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.146 [2024-11-05 11:27:29.224641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.146 [2024-11-05 11:27:29.224788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.146 [2024-11-05 11:27:29.224832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.146 BaseBdev2 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.146 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.146 [ 00:11:30.146 { 00:11:30.146 "name": "BaseBdev2", 00:11:30.146 "aliases": [ 00:11:30.146 "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b" 00:11:30.146 ], 00:11:30.146 "product_name": "Malloc disk", 00:11:30.146 "block_size": 512, 00:11:30.146 "num_blocks": 65536, 00:11:30.146 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:30.146 "assigned_rate_limits": { 00:11:30.146 "rw_ios_per_sec": 0, 00:11:30.146 "rw_mbytes_per_sec": 0, 00:11:30.146 "r_mbytes_per_sec": 0, 00:11:30.146 "w_mbytes_per_sec": 0 00:11:30.146 }, 00:11:30.146 "claimed": false, 00:11:30.146 "zoned": false, 00:11:30.146 "supported_io_types": { 00:11:30.146 "read": true, 00:11:30.146 "write": true, 00:11:30.146 "unmap": true, 00:11:30.146 "flush": true, 00:11:30.146 "reset": true, 00:11:30.146 "nvme_admin": false, 00:11:30.146 "nvme_io": false, 00:11:30.146 
"nvme_io_md": false, 00:11:30.146 "write_zeroes": true, 00:11:30.146 "zcopy": true, 00:11:30.146 "get_zone_info": false, 00:11:30.146 "zone_management": false, 00:11:30.146 "zone_append": false, 00:11:30.146 "compare": false, 00:11:30.146 "compare_and_write": false, 00:11:30.146 "abort": true, 00:11:30.146 "seek_hole": false, 00:11:30.146 "seek_data": false, 00:11:30.146 "copy": true, 00:11:30.146 "nvme_iov_md": false 00:11:30.147 }, 00:11:30.147 "memory_domains": [ 00:11:30.147 { 00:11:30.147 "dma_device_id": "system", 00:11:30.147 "dma_device_type": 1 00:11:30.147 }, 00:11:30.147 { 00:11:30.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.147 "dma_device_type": 2 00:11:30.147 } 00:11:30.147 ], 00:11:30.147 "driver_specific": {} 00:11:30.147 } 00:11:30.147 ] 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.147 BaseBdev3 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.147 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.407 [ 00:11:30.407 { 00:11:30.407 "name": "BaseBdev3", 00:11:30.407 "aliases": [ 00:11:30.407 "bbc65024-321c-42cb-927a-e4a61f13d2a1" 00:11:30.407 ], 00:11:30.407 "product_name": "Malloc disk", 00:11:30.407 "block_size": 512, 00:11:30.407 "num_blocks": 65536, 00:11:30.407 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:30.407 "assigned_rate_limits": { 00:11:30.407 "rw_ios_per_sec": 0, 00:11:30.407 "rw_mbytes_per_sec": 0, 00:11:30.407 "r_mbytes_per_sec": 0, 00:11:30.407 "w_mbytes_per_sec": 0 00:11:30.407 }, 00:11:30.407 "claimed": false, 00:11:30.407 "zoned": false, 00:11:30.407 "supported_io_types": { 00:11:30.407 "read": true, 00:11:30.407 "write": true, 00:11:30.407 "unmap": true, 00:11:30.407 "flush": true, 00:11:30.407 "reset": true, 00:11:30.407 "nvme_admin": false, 
00:11:30.407 "nvme_io": false, 00:11:30.407 "nvme_io_md": false, 00:11:30.407 "write_zeroes": true, 00:11:30.407 "zcopy": true, 00:11:30.407 "get_zone_info": false, 00:11:30.407 "zone_management": false, 00:11:30.407 "zone_append": false, 00:11:30.407 "compare": false, 00:11:30.407 "compare_and_write": false, 00:11:30.407 "abort": true, 00:11:30.407 "seek_hole": false, 00:11:30.407 "seek_data": false, 00:11:30.407 "copy": true, 00:11:30.407 "nvme_iov_md": false 00:11:30.407 }, 00:11:30.407 "memory_domains": [ 00:11:30.407 { 00:11:30.407 "dma_device_id": "system", 00:11:30.407 "dma_device_type": 1 00:11:30.407 }, 00:11:30.407 { 00:11:30.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.407 "dma_device_type": 2 00:11:30.407 } 00:11:30.407 ], 00:11:30.407 "driver_specific": {} 00:11:30.407 } 00:11:30.407 ] 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.407 [2024-11-05 11:27:29.445172] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.407 [2024-11-05 11:27:29.445281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.407 [2024-11-05 11:27:29.445327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.407 [2024-11-05 11:27:29.447391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.407 
11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.407 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.407 "name": "Existed_Raid", 00:11:30.407 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:30.407 "strip_size_kb": 0, 00:11:30.407 "state": "configuring", 00:11:30.407 "raid_level": "raid1", 00:11:30.407 "superblock": true, 00:11:30.407 "num_base_bdevs": 3, 00:11:30.407 "num_base_bdevs_discovered": 2, 00:11:30.407 "num_base_bdevs_operational": 3, 00:11:30.407 "base_bdevs_list": [ 00:11:30.407 { 00:11:30.407 "name": "BaseBdev1", 00:11:30.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.407 "is_configured": false, 00:11:30.407 "data_offset": 0, 00:11:30.407 "data_size": 0 00:11:30.407 }, 00:11:30.407 { 00:11:30.407 "name": "BaseBdev2", 00:11:30.407 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:30.407 "is_configured": true, 00:11:30.407 "data_offset": 2048, 00:11:30.407 "data_size": 63488 00:11:30.407 }, 00:11:30.407 { 00:11:30.407 "name": "BaseBdev3", 00:11:30.408 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:30.408 "is_configured": true, 00:11:30.408 "data_offset": 2048, 00:11:30.408 "data_size": 63488 00:11:30.408 } 00:11:30.408 ] 00:11:30.408 }' 00:11:30.408 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.408 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.667 [2024-11-05 11:27:29.872452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.667 11:27:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.667 "name": 
"Existed_Raid", 00:11:30.667 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:30.667 "strip_size_kb": 0, 00:11:30.667 "state": "configuring", 00:11:30.667 "raid_level": "raid1", 00:11:30.667 "superblock": true, 00:11:30.667 "num_base_bdevs": 3, 00:11:30.667 "num_base_bdevs_discovered": 1, 00:11:30.667 "num_base_bdevs_operational": 3, 00:11:30.667 "base_bdevs_list": [ 00:11:30.667 { 00:11:30.667 "name": "BaseBdev1", 00:11:30.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.667 "is_configured": false, 00:11:30.667 "data_offset": 0, 00:11:30.667 "data_size": 0 00:11:30.667 }, 00:11:30.667 { 00:11:30.667 "name": null, 00:11:30.667 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:30.667 "is_configured": false, 00:11:30.667 "data_offset": 0, 00:11:30.667 "data_size": 63488 00:11:30.667 }, 00:11:30.667 { 00:11:30.667 "name": "BaseBdev3", 00:11:30.667 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:30.667 "is_configured": true, 00:11:30.667 "data_offset": 2048, 00:11:30.667 "data_size": 63488 00:11:30.667 } 00:11:30.667 ] 00:11:30.667 }' 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.667 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:31.236 
11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.236 [2024-11-05 11:27:30.421589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.236 BaseBdev1 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.236 [ 00:11:31.236 { 00:11:31.236 "name": "BaseBdev1", 00:11:31.236 "aliases": [ 00:11:31.236 "01760a8d-276b-4871-83c0-e1c229cbd20f" 00:11:31.236 ], 00:11:31.236 "product_name": "Malloc disk", 00:11:31.236 "block_size": 512, 00:11:31.236 "num_blocks": 65536, 00:11:31.236 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:31.236 "assigned_rate_limits": { 00:11:31.236 "rw_ios_per_sec": 0, 00:11:31.236 "rw_mbytes_per_sec": 0, 00:11:31.236 "r_mbytes_per_sec": 0, 00:11:31.236 "w_mbytes_per_sec": 0 00:11:31.236 }, 00:11:31.236 "claimed": true, 00:11:31.236 "claim_type": "exclusive_write", 00:11:31.236 "zoned": false, 00:11:31.236 "supported_io_types": { 00:11:31.236 "read": true, 00:11:31.236 "write": true, 00:11:31.236 "unmap": true, 00:11:31.236 "flush": true, 00:11:31.236 "reset": true, 00:11:31.236 "nvme_admin": false, 00:11:31.236 "nvme_io": false, 00:11:31.236 "nvme_io_md": false, 00:11:31.236 "write_zeroes": true, 00:11:31.236 "zcopy": true, 00:11:31.236 "get_zone_info": false, 00:11:31.236 "zone_management": false, 00:11:31.236 "zone_append": false, 00:11:31.236 "compare": false, 00:11:31.236 "compare_and_write": false, 00:11:31.236 "abort": true, 00:11:31.236 "seek_hole": false, 00:11:31.236 "seek_data": false, 00:11:31.236 "copy": true, 00:11:31.236 "nvme_iov_md": false 00:11:31.236 }, 00:11:31.236 "memory_domains": [ 00:11:31.236 { 00:11:31.236 "dma_device_id": "system", 00:11:31.236 "dma_device_type": 1 00:11:31.236 }, 00:11:31.236 { 00:11:31.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.236 "dma_device_type": 2 00:11:31.236 } 00:11:31.236 ], 00:11:31.236 "driver_specific": {} 00:11:31.236 } 00:11:31.236 ] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:31.236 
11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.236 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.237 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.237 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.237 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.237 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.495 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.495 "name": "Existed_Raid", 00:11:31.495 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:31.495 "strip_size_kb": 0, 
00:11:31.495 "state": "configuring", 00:11:31.495 "raid_level": "raid1", 00:11:31.495 "superblock": true, 00:11:31.495 "num_base_bdevs": 3, 00:11:31.495 "num_base_bdevs_discovered": 2, 00:11:31.495 "num_base_bdevs_operational": 3, 00:11:31.495 "base_bdevs_list": [ 00:11:31.495 { 00:11:31.495 "name": "BaseBdev1", 00:11:31.495 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:31.495 "is_configured": true, 00:11:31.495 "data_offset": 2048, 00:11:31.495 "data_size": 63488 00:11:31.495 }, 00:11:31.495 { 00:11:31.495 "name": null, 00:11:31.496 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:31.496 "is_configured": false, 00:11:31.496 "data_offset": 0, 00:11:31.496 "data_size": 63488 00:11:31.496 }, 00:11:31.496 { 00:11:31.496 "name": "BaseBdev3", 00:11:31.496 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:31.496 "is_configured": true, 00:11:31.496 "data_offset": 2048, 00:11:31.496 "data_size": 63488 00:11:31.496 } 00:11:31.496 ] 00:11:31.496 }' 00:11:31.496 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.496 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.754 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.755 [2024-11-05 11:27:30.920784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.755 "name": "Existed_Raid", 00:11:31.755 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:31.755 "strip_size_kb": 0, 00:11:31.755 "state": "configuring", 00:11:31.755 "raid_level": "raid1", 00:11:31.755 "superblock": true, 00:11:31.755 "num_base_bdevs": 3, 00:11:31.755 "num_base_bdevs_discovered": 1, 00:11:31.755 "num_base_bdevs_operational": 3, 00:11:31.755 "base_bdevs_list": [ 00:11:31.755 { 00:11:31.755 "name": "BaseBdev1", 00:11:31.755 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:31.755 "is_configured": true, 00:11:31.755 "data_offset": 2048, 00:11:31.755 "data_size": 63488 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": null, 00:11:31.755 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:31.755 "is_configured": false, 00:11:31.755 "data_offset": 0, 00:11:31.755 "data_size": 63488 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": null, 00:11:31.755 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:31.755 "is_configured": false, 00:11:31.755 "data_offset": 0, 00:11:31.755 "data_size": 63488 00:11:31.755 } 00:11:31.755 ] 00:11:31.755 }' 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.755 11:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.334 11:27:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.334 [2024-11-05 11:27:31.427941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.334 "name": "Existed_Raid", 00:11:32.334 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:32.334 "strip_size_kb": 0, 00:11:32.334 "state": "configuring", 00:11:32.334 "raid_level": "raid1", 00:11:32.334 "superblock": true, 00:11:32.334 "num_base_bdevs": 3, 00:11:32.334 "num_base_bdevs_discovered": 2, 00:11:32.334 "num_base_bdevs_operational": 3, 00:11:32.334 "base_bdevs_list": [ 00:11:32.334 { 00:11:32.334 "name": "BaseBdev1", 00:11:32.334 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:32.334 "is_configured": true, 00:11:32.334 "data_offset": 2048, 00:11:32.334 "data_size": 63488 00:11:32.334 }, 00:11:32.334 { 00:11:32.334 "name": null, 00:11:32.334 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:32.334 "is_configured": false, 00:11:32.334 "data_offset": 0, 00:11:32.334 "data_size": 63488 00:11:32.334 }, 00:11:32.334 { 00:11:32.334 "name": "BaseBdev3", 00:11:32.334 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:32.334 "is_configured": true, 00:11:32.334 "data_offset": 2048, 00:11:32.334 "data_size": 63488 00:11:32.334 } 00:11:32.334 ] 00:11:32.334 }' 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.334 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.901 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.901 [2024-11-05 11:27:31.923241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.901 "name": "Existed_Raid", 00:11:32.901 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:32.901 "strip_size_kb": 0, 00:11:32.901 "state": "configuring", 00:11:32.901 "raid_level": "raid1", 00:11:32.901 "superblock": true, 00:11:32.901 "num_base_bdevs": 3, 00:11:32.901 "num_base_bdevs_discovered": 1, 00:11:32.901 "num_base_bdevs_operational": 3, 00:11:32.901 "base_bdevs_list": [ 00:11:32.901 { 00:11:32.901 "name": null, 00:11:32.901 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:32.901 "is_configured": false, 00:11:32.901 "data_offset": 0, 00:11:32.901 "data_size": 63488 00:11:32.901 }, 00:11:32.901 { 00:11:32.901 "name": null, 00:11:32.901 "uuid": 
"7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:32.901 "is_configured": false, 00:11:32.901 "data_offset": 0, 00:11:32.901 "data_size": 63488 00:11:32.901 }, 00:11:32.901 { 00:11:32.901 "name": "BaseBdev3", 00:11:32.901 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:32.901 "is_configured": true, 00:11:32.901 "data_offset": 2048, 00:11:32.901 "data_size": 63488 00:11:32.901 } 00:11:32.901 ] 00:11:32.901 }' 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.901 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 [2024-11-05 11:27:32.564734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.467 "name": "Existed_Raid", 00:11:33.467 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:33.467 "strip_size_kb": 0, 00:11:33.467 "state": "configuring", 00:11:33.467 
"raid_level": "raid1", 00:11:33.467 "superblock": true, 00:11:33.467 "num_base_bdevs": 3, 00:11:33.467 "num_base_bdevs_discovered": 2, 00:11:33.467 "num_base_bdevs_operational": 3, 00:11:33.467 "base_bdevs_list": [ 00:11:33.467 { 00:11:33.467 "name": null, 00:11:33.467 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:33.467 "is_configured": false, 00:11:33.467 "data_offset": 0, 00:11:33.467 "data_size": 63488 00:11:33.467 }, 00:11:33.467 { 00:11:33.467 "name": "BaseBdev2", 00:11:33.467 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:33.467 "is_configured": true, 00:11:33.467 "data_offset": 2048, 00:11:33.467 "data_size": 63488 00:11:33.467 }, 00:11:33.467 { 00:11:33.467 "name": "BaseBdev3", 00:11:33.467 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:33.467 "is_configured": true, 00:11:33.467 "data_offset": 2048, 00:11:33.467 "data_size": 63488 00:11:33.467 } 00:11:33.467 ] 00:11:33.467 }' 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.467 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.034 11:27:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 01760a8d-276b-4871-83c0-e1c229cbd20f 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 [2024-11-05 11:27:33.147201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.034 [2024-11-05 11:27:33.147540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.034 [2024-11-05 11:27:33.147594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.034 [2024-11-05 11:27:33.147893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:34.034 NewBaseBdev 00:11:34.034 [2024-11-05 11:27:33.148106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.034 [2024-11-05 11:27:33.148123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:34.034 [2024-11-05 11:27:33.148292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.034 
11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.034 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 [ 00:11:34.034 { 00:11:34.034 "name": "NewBaseBdev", 00:11:34.034 "aliases": [ 00:11:34.034 "01760a8d-276b-4871-83c0-e1c229cbd20f" 00:11:34.034 ], 00:11:34.034 "product_name": "Malloc disk", 00:11:34.034 "block_size": 512, 00:11:34.034 "num_blocks": 65536, 00:11:34.034 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:34.034 "assigned_rate_limits": { 00:11:34.034 "rw_ios_per_sec": 0, 00:11:34.034 "rw_mbytes_per_sec": 0, 00:11:34.034 "r_mbytes_per_sec": 0, 00:11:34.034 "w_mbytes_per_sec": 0 00:11:34.034 }, 00:11:34.034 "claimed": true, 00:11:34.034 "claim_type": "exclusive_write", 00:11:34.034 
"zoned": false, 00:11:34.034 "supported_io_types": { 00:11:34.034 "read": true, 00:11:34.034 "write": true, 00:11:34.034 "unmap": true, 00:11:34.034 "flush": true, 00:11:34.034 "reset": true, 00:11:34.034 "nvme_admin": false, 00:11:34.034 "nvme_io": false, 00:11:34.034 "nvme_io_md": false, 00:11:34.034 "write_zeroes": true, 00:11:34.034 "zcopy": true, 00:11:34.034 "get_zone_info": false, 00:11:34.034 "zone_management": false, 00:11:34.034 "zone_append": false, 00:11:34.034 "compare": false, 00:11:34.034 "compare_and_write": false, 00:11:34.034 "abort": true, 00:11:34.034 "seek_hole": false, 00:11:34.035 "seek_data": false, 00:11:34.035 "copy": true, 00:11:34.035 "nvme_iov_md": false 00:11:34.035 }, 00:11:34.035 "memory_domains": [ 00:11:34.035 { 00:11:34.035 "dma_device_id": "system", 00:11:34.035 "dma_device_type": 1 00:11:34.035 }, 00:11:34.035 { 00:11:34.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.035 "dma_device_type": 2 00:11:34.035 } 00:11:34.035 ], 00:11:34.035 "driver_specific": {} 00:11:34.035 } 00:11:34.035 ] 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.035 "name": "Existed_Raid", 00:11:34.035 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:34.035 "strip_size_kb": 0, 00:11:34.035 "state": "online", 00:11:34.035 "raid_level": "raid1", 00:11:34.035 "superblock": true, 00:11:34.035 "num_base_bdevs": 3, 00:11:34.035 "num_base_bdevs_discovered": 3, 00:11:34.035 "num_base_bdevs_operational": 3, 00:11:34.035 "base_bdevs_list": [ 00:11:34.035 { 00:11:34.035 "name": "NewBaseBdev", 00:11:34.035 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:34.035 "is_configured": true, 00:11:34.035 "data_offset": 2048, 00:11:34.035 "data_size": 63488 00:11:34.035 }, 00:11:34.035 { 00:11:34.035 "name": "BaseBdev2", 00:11:34.035 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:34.035 "is_configured": true, 00:11:34.035 "data_offset": 2048, 00:11:34.035 "data_size": 63488 00:11:34.035 }, 00:11:34.035 
{ 00:11:34.035 "name": "BaseBdev3", 00:11:34.035 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:34.035 "is_configured": true, 00:11:34.035 "data_offset": 2048, 00:11:34.035 "data_size": 63488 00:11:34.035 } 00:11:34.035 ] 00:11:34.035 }' 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.035 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.603 [2024-11-05 11:27:33.598853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.603 "name": "Existed_Raid", 00:11:34.603 
"aliases": [ 00:11:34.603 "e0438536-cce8-4439-a73a-26b91ebac343" 00:11:34.603 ], 00:11:34.603 "product_name": "Raid Volume", 00:11:34.603 "block_size": 512, 00:11:34.603 "num_blocks": 63488, 00:11:34.603 "uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:34.603 "assigned_rate_limits": { 00:11:34.603 "rw_ios_per_sec": 0, 00:11:34.603 "rw_mbytes_per_sec": 0, 00:11:34.603 "r_mbytes_per_sec": 0, 00:11:34.603 "w_mbytes_per_sec": 0 00:11:34.603 }, 00:11:34.603 "claimed": false, 00:11:34.603 "zoned": false, 00:11:34.603 "supported_io_types": { 00:11:34.603 "read": true, 00:11:34.603 "write": true, 00:11:34.603 "unmap": false, 00:11:34.603 "flush": false, 00:11:34.603 "reset": true, 00:11:34.603 "nvme_admin": false, 00:11:34.603 "nvme_io": false, 00:11:34.603 "nvme_io_md": false, 00:11:34.603 "write_zeroes": true, 00:11:34.603 "zcopy": false, 00:11:34.603 "get_zone_info": false, 00:11:34.603 "zone_management": false, 00:11:34.603 "zone_append": false, 00:11:34.603 "compare": false, 00:11:34.603 "compare_and_write": false, 00:11:34.603 "abort": false, 00:11:34.603 "seek_hole": false, 00:11:34.603 "seek_data": false, 00:11:34.603 "copy": false, 00:11:34.603 "nvme_iov_md": false 00:11:34.603 }, 00:11:34.603 "memory_domains": [ 00:11:34.603 { 00:11:34.603 "dma_device_id": "system", 00:11:34.603 "dma_device_type": 1 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.603 "dma_device_type": 2 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "dma_device_id": "system", 00:11:34.603 "dma_device_type": 1 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.603 "dma_device_type": 2 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "dma_device_id": "system", 00:11:34.603 "dma_device_type": 1 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.603 "dma_device_type": 2 00:11:34.603 } 00:11:34.603 ], 00:11:34.603 "driver_specific": { 00:11:34.603 "raid": { 00:11:34.603 
"uuid": "e0438536-cce8-4439-a73a-26b91ebac343", 00:11:34.603 "strip_size_kb": 0, 00:11:34.603 "state": "online", 00:11:34.603 "raid_level": "raid1", 00:11:34.603 "superblock": true, 00:11:34.603 "num_base_bdevs": 3, 00:11:34.603 "num_base_bdevs_discovered": 3, 00:11:34.603 "num_base_bdevs_operational": 3, 00:11:34.603 "base_bdevs_list": [ 00:11:34.603 { 00:11:34.603 "name": "NewBaseBdev", 00:11:34.603 "uuid": "01760a8d-276b-4871-83c0-e1c229cbd20f", 00:11:34.603 "is_configured": true, 00:11:34.603 "data_offset": 2048, 00:11:34.603 "data_size": 63488 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "name": "BaseBdev2", 00:11:34.603 "uuid": "7e48ca4a-5f83-4711-9ff1-4c358abf8c7b", 00:11:34.603 "is_configured": true, 00:11:34.603 "data_offset": 2048, 00:11:34.603 "data_size": 63488 00:11:34.603 }, 00:11:34.603 { 00:11:34.603 "name": "BaseBdev3", 00:11:34.603 "uuid": "bbc65024-321c-42cb-927a-e4a61f13d2a1", 00:11:34.603 "is_configured": true, 00:11:34.603 "data_offset": 2048, 00:11:34.603 "data_size": 63488 00:11:34.603 } 00:11:34.603 ] 00:11:34.603 } 00:11:34.603 } 00:11:34.603 }' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:34.603 BaseBdev2 00:11:34.603 BaseBdev3' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.603 
11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.603 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.604 [2024-11-05 11:27:33.862030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.604 [2024-11-05 11:27:33.862110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.604 [2024-11-05 11:27:33.862234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.604 [2024-11-05 11:27:33.862553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.604 [2024-11-05 11:27:33.862609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68155 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68155 ']' 00:11:34.604 11:27:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68155 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:34.604 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68155 00:11:34.862 killing process with pid 68155 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68155' 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68155 00:11:34.862 [2024-11-05 11:27:33.910121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.862 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68155 00:11:35.120 [2024-11-05 11:27:34.226552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.496 11:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:36.496 ************************************ 00:11:36.496 END TEST raid_state_function_test_sb 00:11:36.496 ************************************ 00:11:36.496 00:11:36.496 real 0m10.904s 00:11:36.496 user 0m17.330s 00:11:36.496 sys 0m1.951s 00:11:36.496 11:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.496 11:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.496 11:27:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:11:36.496 11:27:35 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:36.496 11:27:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.496 11:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.496 ************************************ 00:11:36.496 START TEST raid_superblock_test 00:11:36.496 ************************************ 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:36.496 11:27:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68781 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68781 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68781 ']' 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.496 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.496 [2024-11-05 11:27:35.539919] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:11:36.496 [2024-11-05 11:27:35.540082] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68781 ] 00:11:36.496 [2024-11-05 11:27:35.718771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.756 [2024-11-05 11:27:35.833853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.014 [2024-11-05 11:27:36.041956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.014 [2024-11-05 11:27:36.042002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:37.280 
11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.280 malloc1 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.280 [2024-11-05 11:27:36.462495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:37.280 [2024-11-05 11:27:36.462655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.280 [2024-11-05 11:27:36.462702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:37.280 [2024-11-05 11:27:36.462731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.280 [2024-11-05 11:27:36.464938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.280 [2024-11-05 11:27:36.465019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.280 pt1 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.280 malloc2 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.280 [2024-11-05 11:27:36.523048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.280 [2024-11-05 11:27:36.523199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.280 [2024-11-05 11:27:36.523246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.280 [2024-11-05 11:27:36.523280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.280 [2024-11-05 11:27:36.525462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.280 [2024-11-05 11:27:36.525537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.280 
pt2 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.549 malloc3 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.549 [2024-11-05 11:27:36.591262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.549 [2024-11-05 11:27:36.591316] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.549 [2024-11-05 11:27:36.591352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:37.549 [2024-11-05 11:27:36.591361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.549 [2024-11-05 11:27:36.593491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.549 [2024-11-05 11:27:36.593527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.549 pt3 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.549 [2024-11-05 11:27:36.603302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.549 [2024-11-05 11:27:36.605231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.549 [2024-11-05 11:27:36.605297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.549 [2024-11-05 11:27:36.605454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:37.549 [2024-11-05 11:27:36.605472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.549 [2024-11-05 11:27:36.605714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:37.549 
[2024-11-05 11:27:36.605883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:37.549 [2024-11-05 11:27:36.605896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:37.549 [2024-11-05 11:27:36.606044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.549 "name": "raid_bdev1", 00:11:37.549 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:37.549 "strip_size_kb": 0, 00:11:37.549 "state": "online", 00:11:37.549 "raid_level": "raid1", 00:11:37.549 "superblock": true, 00:11:37.549 "num_base_bdevs": 3, 00:11:37.549 "num_base_bdevs_discovered": 3, 00:11:37.549 "num_base_bdevs_operational": 3, 00:11:37.549 "base_bdevs_list": [ 00:11:37.549 { 00:11:37.549 "name": "pt1", 00:11:37.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.549 "is_configured": true, 00:11:37.549 "data_offset": 2048, 00:11:37.549 "data_size": 63488 00:11:37.549 }, 00:11:37.549 { 00:11:37.549 "name": "pt2", 00:11:37.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.549 "is_configured": true, 00:11:37.549 "data_offset": 2048, 00:11:37.549 "data_size": 63488 00:11:37.549 }, 00:11:37.549 { 00:11:37.549 "name": "pt3", 00:11:37.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.549 "is_configured": true, 00:11:37.549 "data_offset": 2048, 00:11:37.549 "data_size": 63488 00:11:37.549 } 00:11:37.549 ] 00:11:37.549 }' 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.549 11:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.809 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:37.809 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.809 11:27:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.809 [2024-11-05 11:27:37.010976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.809 "name": "raid_bdev1", 00:11:37.809 "aliases": [ 00:11:37.809 "65230504-7298-4f85-88e6-96a996eda5b5" 00:11:37.809 ], 00:11:37.809 "product_name": "Raid Volume", 00:11:37.809 "block_size": 512, 00:11:37.809 "num_blocks": 63488, 00:11:37.809 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:37.809 "assigned_rate_limits": { 00:11:37.809 "rw_ios_per_sec": 0, 00:11:37.809 "rw_mbytes_per_sec": 0, 00:11:37.809 "r_mbytes_per_sec": 0, 00:11:37.809 "w_mbytes_per_sec": 0 00:11:37.809 }, 00:11:37.809 "claimed": false, 00:11:37.809 "zoned": false, 00:11:37.809 "supported_io_types": { 00:11:37.809 "read": true, 00:11:37.809 "write": true, 00:11:37.809 "unmap": false, 00:11:37.809 "flush": false, 00:11:37.809 "reset": true, 00:11:37.809 "nvme_admin": false, 00:11:37.809 "nvme_io": false, 00:11:37.809 "nvme_io_md": false, 00:11:37.809 "write_zeroes": true, 00:11:37.809 "zcopy": false, 00:11:37.809 "get_zone_info": false, 00:11:37.809 "zone_management": false, 00:11:37.809 "zone_append": false, 00:11:37.809 "compare": false, 00:11:37.809 
"compare_and_write": false, 00:11:37.809 "abort": false, 00:11:37.809 "seek_hole": false, 00:11:37.809 "seek_data": false, 00:11:37.809 "copy": false, 00:11:37.809 "nvme_iov_md": false 00:11:37.809 }, 00:11:37.809 "memory_domains": [ 00:11:37.809 { 00:11:37.809 "dma_device_id": "system", 00:11:37.809 "dma_device_type": 1 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.809 "dma_device_type": 2 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "dma_device_id": "system", 00:11:37.809 "dma_device_type": 1 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.809 "dma_device_type": 2 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "dma_device_id": "system", 00:11:37.809 "dma_device_type": 1 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.809 "dma_device_type": 2 00:11:37.809 } 00:11:37.809 ], 00:11:37.809 "driver_specific": { 00:11:37.809 "raid": { 00:11:37.809 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:37.809 "strip_size_kb": 0, 00:11:37.809 "state": "online", 00:11:37.809 "raid_level": "raid1", 00:11:37.809 "superblock": true, 00:11:37.809 "num_base_bdevs": 3, 00:11:37.809 "num_base_bdevs_discovered": 3, 00:11:37.809 "num_base_bdevs_operational": 3, 00:11:37.809 "base_bdevs_list": [ 00:11:37.809 { 00:11:37.809 "name": "pt1", 00:11:37.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.809 "is_configured": true, 00:11:37.809 "data_offset": 2048, 00:11:37.809 "data_size": 63488 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "name": "pt2", 00:11:37.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.809 "is_configured": true, 00:11:37.809 "data_offset": 2048, 00:11:37.809 "data_size": 63488 00:11:37.809 }, 00:11:37.809 { 00:11:37.809 "name": "pt3", 00:11:37.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.809 "is_configured": true, 00:11:37.809 "data_offset": 2048, 00:11:37.809 "data_size": 63488 00:11:37.809 } 
00:11:37.809 ] 00:11:37.809 } 00:11:37.809 } 00:11:37.809 }' 00:11:37.809 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.068 pt2 00:11:38.068 pt3' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.068 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 [2024-11-05 11:27:37.290521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=65230504-7298-4f85-88e6-96a996eda5b5 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 65230504-7298-4f85-88e6-96a996eda5b5 ']' 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.069 [2024-11-05 11:27:37.338106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.069 [2024-11-05 11:27:37.338138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.069 [2024-11-05 11:27:37.338311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.069 [2024-11-05 11:27:37.338391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.069 [2024-11-05 11:27:37.338402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:38.069 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:38.328 11:27:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:38.328 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 [2024-11-05 11:27:37.477854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:38.329 [2024-11-05 11:27:37.479774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:38.329 [2024-11-05 11:27:37.479885] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:38.329 [2024-11-05 11:27:37.479954] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:38.329 [2024-11-05 11:27:37.480045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:38.329 [2024-11-05 11:27:37.480147] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:38.329 [2024-11-05 11:27:37.480207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.329 [2024-11-05 11:27:37.480245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:38.329 request: 00:11:38.329 { 00:11:38.329 "name": "raid_bdev1", 00:11:38.329 "raid_level": "raid1", 00:11:38.329 "base_bdevs": [ 00:11:38.329 "malloc1", 00:11:38.329 "malloc2", 00:11:38.329 "malloc3" 00:11:38.329 ], 00:11:38.329 "superblock": false, 00:11:38.329 "method": "bdev_raid_create", 00:11:38.329 "req_id": 1 00:11:38.329 } 00:11:38.329 Got JSON-RPC error response 00:11:38.329 response: 00:11:38.329 { 00:11:38.329 "code": -17, 00:11:38.329 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:38.329 } 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 [2024-11-05 11:27:37.533710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.329 [2024-11-05 11:27:37.533802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.329 [2024-11-05 11:27:37.533859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.329 [2024-11-05 11:27:37.533888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.329 [2024-11-05 11:27:37.536039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.329 [2024-11-05 11:27:37.536109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.329 [2024-11-05 11:27:37.536219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:38.329 [2024-11-05 11:27:37.536296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.329 pt1 00:11:38.329 
11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.329 "name": "raid_bdev1", 00:11:38.329 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:38.329 "strip_size_kb": 0, 00:11:38.329 
"state": "configuring", 00:11:38.329 "raid_level": "raid1", 00:11:38.329 "superblock": true, 00:11:38.329 "num_base_bdevs": 3, 00:11:38.329 "num_base_bdevs_discovered": 1, 00:11:38.329 "num_base_bdevs_operational": 3, 00:11:38.329 "base_bdevs_list": [ 00:11:38.329 { 00:11:38.329 "name": "pt1", 00:11:38.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.329 "is_configured": true, 00:11:38.329 "data_offset": 2048, 00:11:38.329 "data_size": 63488 00:11:38.329 }, 00:11:38.329 { 00:11:38.329 "name": null, 00:11:38.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.329 "is_configured": false, 00:11:38.329 "data_offset": 2048, 00:11:38.329 "data_size": 63488 00:11:38.329 }, 00:11:38.329 { 00:11:38.329 "name": null, 00:11:38.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.329 "is_configured": false, 00:11:38.329 "data_offset": 2048, 00:11:38.329 "data_size": 63488 00:11:38.329 } 00:11:38.329 ] 00:11:38.329 }' 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.329 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 [2024-11-05 11:27:37.941063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.905 [2024-11-05 11:27:37.941193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.905 [2024-11-05 11:27:37.941240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:38.905 
[2024-11-05 11:27:37.941272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.905 [2024-11-05 11:27:37.941784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.905 [2024-11-05 11:27:37.941846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.905 [2024-11-05 11:27:37.941974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.905 [2024-11-05 11:27:37.942031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.905 pt2 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 [2024-11-05 11:27:37.953025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 11:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.905 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.905 "name": "raid_bdev1", 00:11:38.905 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:38.905 "strip_size_kb": 0, 00:11:38.905 "state": "configuring", 00:11:38.905 "raid_level": "raid1", 00:11:38.905 "superblock": true, 00:11:38.905 "num_base_bdevs": 3, 00:11:38.905 "num_base_bdevs_discovered": 1, 00:11:38.905 "num_base_bdevs_operational": 3, 00:11:38.905 "base_bdevs_list": [ 00:11:38.905 { 00:11:38.905 "name": "pt1", 00:11:38.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.905 "is_configured": true, 00:11:38.905 "data_offset": 2048, 00:11:38.905 "data_size": 63488 00:11:38.905 }, 00:11:38.906 { 00:11:38.906 "name": null, 00:11:38.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.906 "is_configured": false, 00:11:38.906 "data_offset": 0, 00:11:38.906 "data_size": 63488 00:11:38.906 }, 00:11:38.906 { 00:11:38.906 "name": null, 00:11:38.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.906 "is_configured": false, 00:11:38.906 
"data_offset": 2048, 00:11:38.906 "data_size": 63488 00:11:38.906 } 00:11:38.906 ] 00:11:38.906 }' 00:11:38.906 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.906 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:39.164 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.164 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.164 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.164 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 [2024-11-05 11:27:38.428263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.164 [2024-11-05 11:27:38.428386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.165 [2024-11-05 11:27:38.428458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:39.165 [2024-11-05 11:27:38.428497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.165 [2024-11-05 11:27:38.428988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.165 [2024-11-05 11:27:38.429051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.165 [2024-11-05 11:27:38.429180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.165 [2024-11-05 11:27:38.429256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.165 pt2 00:11:39.165 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.165 11:27:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:39.165 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.165 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.165 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.165 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.424 [2024-11-05 11:27:38.440209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.424 [2024-11-05 11:27:38.440291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.424 [2024-11-05 11:27:38.440328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.424 [2024-11-05 11:27:38.440360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.424 [2024-11-05 11:27:38.440751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.424 [2024-11-05 11:27:38.440810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.424 [2024-11-05 11:27:38.440906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:39.424 [2024-11-05 11:27:38.440956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.424 [2024-11-05 11:27:38.441114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:39.424 [2024-11-05 11:27:38.441169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.424 [2024-11-05 11:27:38.441426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.424 [2024-11-05 11:27:38.441624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:39.424 [2024-11-05 11:27:38.441665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:39.424 [2024-11-05 11:27:38.441844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.424 pt3 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.424 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.424 "name": "raid_bdev1", 00:11:39.424 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:39.424 "strip_size_kb": 0, 00:11:39.424 "state": "online", 00:11:39.424 "raid_level": "raid1", 00:11:39.424 "superblock": true, 00:11:39.424 "num_base_bdevs": 3, 00:11:39.424 "num_base_bdevs_discovered": 3, 00:11:39.424 "num_base_bdevs_operational": 3, 00:11:39.424 "base_bdevs_list": [ 00:11:39.424 { 00:11:39.424 "name": "pt1", 00:11:39.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.424 "is_configured": true, 00:11:39.424 "data_offset": 2048, 00:11:39.424 "data_size": 63488 00:11:39.424 }, 00:11:39.424 { 00:11:39.424 "name": "pt2", 00:11:39.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.424 "is_configured": true, 00:11:39.424 "data_offset": 2048, 00:11:39.424 "data_size": 63488 00:11:39.424 }, 00:11:39.424 { 00:11:39.424 "name": "pt3", 00:11:39.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.425 "is_configured": true, 00:11:39.425 "data_offset": 2048, 00:11:39.425 "data_size": 63488 00:11:39.425 } 00:11:39.425 ] 00:11:39.425 }' 00:11:39.425 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.425 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.684 [2024-11-05 11:27:38.939674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.684 11:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.943 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.943 "name": "raid_bdev1", 00:11:39.943 "aliases": [ 00:11:39.943 "65230504-7298-4f85-88e6-96a996eda5b5" 00:11:39.943 ], 00:11:39.943 "product_name": "Raid Volume", 00:11:39.943 "block_size": 512, 00:11:39.943 "num_blocks": 63488, 00:11:39.943 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:39.943 "assigned_rate_limits": { 00:11:39.943 "rw_ios_per_sec": 0, 00:11:39.943 "rw_mbytes_per_sec": 0, 00:11:39.943 "r_mbytes_per_sec": 0, 00:11:39.943 "w_mbytes_per_sec": 0 00:11:39.943 }, 00:11:39.943 "claimed": false, 00:11:39.943 "zoned": false, 00:11:39.943 "supported_io_types": { 00:11:39.943 "read": true, 00:11:39.943 "write": true, 00:11:39.943 "unmap": false, 00:11:39.943 "flush": false, 00:11:39.943 "reset": true, 00:11:39.943 "nvme_admin": false, 00:11:39.943 "nvme_io": false, 00:11:39.943 "nvme_io_md": false, 00:11:39.943 "write_zeroes": true, 00:11:39.943 "zcopy": false, 00:11:39.943 "get_zone_info": false, 
00:11:39.943 "zone_management": false, 00:11:39.943 "zone_append": false, 00:11:39.943 "compare": false, 00:11:39.943 "compare_and_write": false, 00:11:39.943 "abort": false, 00:11:39.943 "seek_hole": false, 00:11:39.943 "seek_data": false, 00:11:39.943 "copy": false, 00:11:39.943 "nvme_iov_md": false 00:11:39.943 }, 00:11:39.943 "memory_domains": [ 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 } 00:11:39.943 ], 00:11:39.943 "driver_specific": { 00:11:39.943 "raid": { 00:11:39.943 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:39.943 "strip_size_kb": 0, 00:11:39.943 "state": "online", 00:11:39.943 "raid_level": "raid1", 00:11:39.943 "superblock": true, 00:11:39.943 "num_base_bdevs": 3, 00:11:39.943 "num_base_bdevs_discovered": 3, 00:11:39.943 "num_base_bdevs_operational": 3, 00:11:39.943 "base_bdevs_list": [ 00:11:39.943 { 00:11:39.943 "name": "pt1", 00:11:39.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "name": "pt2", 00:11:39.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "name": "pt3", 00:11:39.943 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 } 00:11:39.943 ] 00:11:39.943 } 00:11:39.943 } 00:11:39.943 }' 00:11:39.943 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:39.943 pt2 00:11:39.943 pt3' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.943 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.202 [2024-11-05 11:27:39.219168] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 65230504-7298-4f85-88e6-96a996eda5b5 '!=' 65230504-7298-4f85-88e6-96a996eda5b5 ']' 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.202 [2024-11-05 11:27:39.262837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.202 11:27:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.202 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.202 "name": "raid_bdev1", 00:11:40.202 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5", 00:11:40.202 "strip_size_kb": 0, 00:11:40.202 "state": "online", 00:11:40.202 "raid_level": "raid1", 00:11:40.202 "superblock": true, 00:11:40.202 "num_base_bdevs": 3, 00:11:40.203 "num_base_bdevs_discovered": 2, 00:11:40.203 "num_base_bdevs_operational": 2, 00:11:40.203 "base_bdevs_list": [ 00:11:40.203 { 00:11:40.203 "name": null, 00:11:40.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.203 "is_configured": false, 00:11:40.203 "data_offset": 0, 00:11:40.203 "data_size": 63488 00:11:40.203 }, 00:11:40.203 { 00:11:40.203 "name": "pt2", 00:11:40.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.203 "is_configured": true, 00:11:40.203 "data_offset": 2048, 00:11:40.203 "data_size": 63488 00:11:40.203 }, 00:11:40.203 { 00:11:40.203 "name": "pt3", 00:11:40.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.203 "is_configured": true, 00:11:40.203 "data_offset": 2048, 00:11:40.203 "data_size": 63488 00:11:40.203 } 
00:11:40.203 ]
00:11:40.203 }'
00:11:40.203 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.203 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.461 [2024-11-05 11:27:39.706048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:40.461 [2024-11-05 11:27:39.706135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:40.461 [2024-11-05 11:27:39.706239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:40.461 [2024-11-05 11:27:39.706332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:40.461 [2024-11-05 11:27:39.706385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.461 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.719 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.720 [2024-11-05 11:27:39.773886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:40.720 [2024-11-05 11:27:39.773941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:40.720 [2024-11-05 11:27:39.773974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:11:40.720 [2024-11-05 11:27:39.773984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:40.720 [2024-11-05 11:27:39.776166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:40.720 [2024-11-05 11:27:39.776202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:40.720 [2024-11-05 11:27:39.776274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:40.720 [2024-11-05 11:27:39.776318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:40.720 pt2
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.720 "name": "raid_bdev1",
00:11:40.720 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5",
00:11:40.720 "strip_size_kb": 0,
00:11:40.720 "state": "configuring",
00:11:40.720 "raid_level": "raid1",
00:11:40.720 "superblock": true,
00:11:40.720 "num_base_bdevs": 3,
00:11:40.720 "num_base_bdevs_discovered": 1,
00:11:40.720 "num_base_bdevs_operational": 2,
00:11:40.720 "base_bdevs_list": [
00:11:40.720 {
00:11:40.720 "name": null,
00:11:40.720 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.720 "is_configured": false,
00:11:40.720 "data_offset": 2048,
00:11:40.720 "data_size": 63488
00:11:40.720 },
00:11:40.720 {
00:11:40.720 "name": "pt2",
00:11:40.720 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:40.720 "is_configured": true,
00:11:40.720 "data_offset": 2048,
00:11:40.720 "data_size": 63488
00:11:40.720 },
00:11:40.720 {
00:11:40.720 "name": null,
00:11:40.720 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:40.720 "is_configured": false,
00:11:40.720 "data_offset": 2048,
00:11:40.720 "data_size": 63488
00:11:40.720 }
00:11:40.720 ]
00:11:40.720 }'
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.720 11:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.978 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.979 [2024-11-05 11:27:40.193214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:40.979 [2024-11-05 11:27:40.193346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:40.979 [2024-11-05 11:27:40.193371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:40.979 [2024-11-05 11:27:40.193382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:40.979 [2024-11-05 11:27:40.193843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:40.979 [2024-11-05 11:27:40.193866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:40.979 [2024-11-05 11:27:40.193963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:40.979 [2024-11-05 11:27:40.193992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:40.979 [2024-11-05 11:27:40.194110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:40.979 [2024-11-05 11:27:40.194121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:40.979 [2024-11-05 11:27:40.194396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:11:40.979 [2024-11-05 11:27:40.194565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:40.979 [2024-11-05 11:27:40.194581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:40.979 [2024-11-05 11:27:40.194714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:40.979 pt3
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.979 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.237 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.237 "name": "raid_bdev1",
00:11:41.238 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5",
00:11:41.238 "strip_size_kb": 0,
00:11:41.238 "state": "online",
00:11:41.238 "raid_level": "raid1",
00:11:41.238 "superblock": true,
00:11:41.238 "num_base_bdevs": 3,
00:11:41.238 "num_base_bdevs_discovered": 2,
00:11:41.238 "num_base_bdevs_operational": 2,
00:11:41.238 "base_bdevs_list": [
00:11:41.238 {
00:11:41.238 "name": null,
00:11:41.238 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.238 "is_configured": false,
00:11:41.238 "data_offset": 2048,
00:11:41.238 "data_size": 63488
00:11:41.238 },
00:11:41.238 {
00:11:41.238 "name": "pt2",
00:11:41.238 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:41.238 "is_configured": true,
00:11:41.238 "data_offset": 2048,
00:11:41.238 "data_size": 63488
00:11:41.238 },
00:11:41.238 {
00:11:41.238 "name": "pt3",
00:11:41.238 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:41.238 "is_configured": true,
00:11:41.238 "data_offset": 2048,
00:11:41.238 "data_size": 63488
00:11:41.238 }
00:11:41.238 ]
00:11:41.238 }'
00:11:41.238 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.238 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 [2024-11-05 11:27:40.628439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:41.497 [2024-11-05 11:27:40.628513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:41.497 [2024-11-05 11:27:40.628605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:41.497 [2024-11-05 11:27:40.628684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:41.497 [2024-11-05 11:27:40.628694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 [2024-11-05 11:27:40.692339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:41.497 [2024-11-05 11:27:40.692462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.497 [2024-11-05 11:27:40.692500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:41.497 [2024-11-05 11:27:40.692527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.497 [2024-11-05 11:27:40.694730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.497 [2024-11-05 11:27:40.694796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:41.497 [2024-11-05 11:27:40.694958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:41.497 [2024-11-05 11:27:40.695051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:41.497 [2024-11-05 11:27:40.695235] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:11:41.497 [2024-11-05 11:27:40.695294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:41.497 [2024-11-05 11:27:40.695353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:11:41.497 [2024-11-05 11:27:40.695463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:41.497 pt1
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.497 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.497 "name": "raid_bdev1",
00:11:41.497 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5",
00:11:41.497 "strip_size_kb": 0,
00:11:41.497 "state": "configuring",
00:11:41.497 "raid_level": "raid1",
00:11:41.497 "superblock": true,
00:11:41.497 "num_base_bdevs": 3,
00:11:41.497 "num_base_bdevs_discovered": 1,
00:11:41.497 "num_base_bdevs_operational": 2,
00:11:41.497 "base_bdevs_list": [
00:11:41.497 {
00:11:41.497 "name": null,
00:11:41.497 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.497 "is_configured": false,
00:11:41.497 "data_offset": 2048,
00:11:41.498 "data_size": 63488
00:11:41.498 },
00:11:41.498 {
00:11:41.498 "name": "pt2",
00:11:41.498 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:41.498 "is_configured": true,
00:11:41.498 "data_offset": 2048,
00:11:41.498 "data_size": 63488
00:11:41.498 },
00:11:41.498 {
00:11:41.498 "name": null,
00:11:41.498 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:41.498 "is_configured": false,
00:11:41.498 "data_offset": 2048,
00:11:41.498 "data_size": 63488
00:11:41.498 }
00:11:41.498 ]
00:11:41.498 }'
00:11:41.498 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.498 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.065 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.065 [2024-11-05 11:27:41.167545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:42.065 [2024-11-05 11:27:41.167655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:42.065 [2024-11-05 11:27:41.167693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:42.065 [2024-11-05 11:27:41.167722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:42.065 [2024-11-05 11:27:41.168210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:42.065 [2024-11-05 11:27:41.168267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:42.065 [2024-11-05 11:27:41.168381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:42.065 [2024-11-05 11:27:41.168459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:42.066 [2024-11-05 11:27:41.168628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:11:42.066 [2024-11-05 11:27:41.168665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:42.066 [2024-11-05 11:27:41.168934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:42.066 [2024-11-05 11:27:41.169149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:11:42.066 [2024-11-05 11:27:41.169195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:11:42.066 [2024-11-05 11:27:41.169385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:42.066 pt3
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:42.066 "name": "raid_bdev1",
00:11:42.066 "uuid": "65230504-7298-4f85-88e6-96a996eda5b5",
00:11:42.066 "strip_size_kb": 0,
00:11:42.066 "state": "online",
00:11:42.066 "raid_level": "raid1",
00:11:42.066 "superblock": true,
00:11:42.066 "num_base_bdevs": 3,
00:11:42.066 "num_base_bdevs_discovered": 2,
00:11:42.066 "num_base_bdevs_operational": 2,
00:11:42.066 "base_bdevs_list": [
00:11:42.066 {
00:11:42.066 "name": null,
00:11:42.066 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:42.066 "is_configured": false,
00:11:42.066 "data_offset": 2048,
00:11:42.066 "data_size": 63488
00:11:42.066 },
00:11:42.066 {
00:11:42.066 "name": "pt2",
00:11:42.066 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:42.066 "is_configured": true,
00:11:42.066 "data_offset": 2048,
00:11:42.066 "data_size": 63488
00:11:42.066 },
00:11:42.066 {
00:11:42.066 "name": "pt3",
00:11:42.066 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:42.066 "is_configured": true,
00:11:42.066 "data_offset": 2048,
00:11:42.066 "data_size": 63488
00:11:42.066 }
00:11:42.066 ]
00:11:42.066 }'
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:42.066 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.325 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:11:42.325 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.325 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.325 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:11:42.325 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.590 [2024-11-05 11:27:41.627050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 65230504-7298-4f85-88e6-96a996eda5b5 '!=' 65230504-7298-4f85-88e6-96a996eda5b5 ']'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68781
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68781 ']'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68781
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68781
killing process with pid 68781
11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68781'
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68781
00:11:42.590 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68781
[2024-11-05 11:27:41.692301] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:42.590 [2024-11-05 11:27:41.692397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:42.590 [2024-11-05 11:27:41.692464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:42.590 [2024-11-05 11:27:41.692480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:11:42.859 [2024-11-05 11:27:41.991170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:44.237 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:44.237
00:11:44.237 real 0m7.677s
00:11:44.237 user 0m12.003s
00:11:44.237 sys 0m1.365s
00:11:44.237 ************************************
00:11:44.237 END TEST raid_superblock_test
00:11:44.237 ************************************
00:11:44.237 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:44.237 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.238 11:27:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:11:44.238 11:27:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:11:44.238 11:27:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:44.238 11:27:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:44.238 ************************************
00:11:44.238 START TEST raid_read_error_test
00:11:44.238 ************************************
00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read
00:11:44.238 11:27:43
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.238 11:27:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hZJU6w9Rz5 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69227 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69227 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69227 ']' 00:11:44.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.238 11:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.238 [2024-11-05 11:27:43.278835] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:11:44.238 [2024-11-05 11:27:43.278966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69227 ] 00:11:44.238 [2024-11-05 11:27:43.454974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.498 [2024-11-05 11:27:43.566532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.498 [2024-11-05 11:27:43.755713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.498 [2024-11-05 11:27:43.755751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 BaseBdev1_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 true 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 [2024-11-05 11:27:44.163245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.068 [2024-11-05 11:27:44.163303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.068 [2024-11-05 11:27:44.163325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.068 [2024-11-05 11:27:44.163335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.068 [2024-11-05 11:27:44.165369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.068 [2024-11-05 11:27:44.165409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.068 BaseBdev1 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 BaseBdev2_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 true 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 [2024-11-05 11:27:44.229424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.068 [2024-11-05 11:27:44.229501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.068 [2024-11-05 11:27:44.229520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.068 [2024-11-05 11:27:44.229531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.068 [2024-11-05 11:27:44.231696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.068 [2024-11-05 11:27:44.231735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.068 BaseBdev2 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 BaseBdev3_malloc 00:11:45.068 11:27:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 true 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 [2024-11-05 11:27:44.308305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.068 [2024-11-05 11:27:44.308422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.068 [2024-11-05 11:27:44.308446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.068 [2024-11-05 11:27:44.308455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.068 [2024-11-05 11:27:44.310489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.068 [2024-11-05 11:27:44.310526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.068 BaseBdev3 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.068 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.068 [2024-11-05 11:27:44.320383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.069 [2024-11-05 11:27:44.322185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.069 [2024-11-05 11:27:44.322308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.069 [2024-11-05 11:27:44.322519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:45.069 [2024-11-05 11:27:44.322532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.069 [2024-11-05 11:27:44.322779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:45.069 [2024-11-05 11:27:44.322958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:45.069 [2024-11-05 11:27:44.322972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:45.069 [2024-11-05 11:27:44.323171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.069 11:27:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.069 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.329 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.329 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.329 "name": "raid_bdev1", 00:11:45.329 "uuid": "b37c7874-188d-4b13-8586-80205c8c5f7a", 00:11:45.329 "strip_size_kb": 0, 00:11:45.329 "state": "online", 00:11:45.329 "raid_level": "raid1", 00:11:45.329 "superblock": true, 00:11:45.329 "num_base_bdevs": 3, 00:11:45.329 "num_base_bdevs_discovered": 3, 00:11:45.329 "num_base_bdevs_operational": 3, 00:11:45.329 "base_bdevs_list": [ 00:11:45.329 { 00:11:45.329 "name": "BaseBdev1", 00:11:45.329 "uuid": "dd4a65ea-e516-529a-8aaa-5d12343991ca", 00:11:45.329 "is_configured": true, 00:11:45.329 "data_offset": 2048, 00:11:45.329 "data_size": 63488 00:11:45.329 }, 00:11:45.329 { 00:11:45.330 "name": "BaseBdev2", 00:11:45.330 "uuid": "05d6dfd7-0bbd-55e1-838d-a502e342da49", 00:11:45.330 "is_configured": true, 00:11:45.330 "data_offset": 2048, 00:11:45.330 "data_size": 63488 
00:11:45.330 }, 00:11:45.330 { 00:11:45.330 "name": "BaseBdev3", 00:11:45.330 "uuid": "d2beafa6-c6bd-5c2a-a14a-b443e6ac6bff", 00:11:45.330 "is_configured": true, 00:11:45.330 "data_offset": 2048, 00:11:45.330 "data_size": 63488 00:11:45.330 } 00:11:45.330 ] 00:11:45.330 }' 00:11:45.330 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.330 11:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.590 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.590 11:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.590 [2024-11-05 11:27:44.836945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.528 
11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.528 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.787 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.787 "name": "raid_bdev1", 00:11:46.787 "uuid": "b37c7874-188d-4b13-8586-80205c8c5f7a", 00:11:46.787 "strip_size_kb": 0, 00:11:46.787 "state": "online", 00:11:46.787 "raid_level": "raid1", 00:11:46.787 "superblock": true, 00:11:46.787 "num_base_bdevs": 3, 00:11:46.787 "num_base_bdevs_discovered": 3, 00:11:46.787 "num_base_bdevs_operational": 3, 00:11:46.787 "base_bdevs_list": [ 00:11:46.787 { 00:11:46.787 "name": "BaseBdev1", 00:11:46.787 "uuid": "dd4a65ea-e516-529a-8aaa-5d12343991ca", 
00:11:46.787 "is_configured": true, 00:11:46.787 "data_offset": 2048, 00:11:46.787 "data_size": 63488 00:11:46.787 }, 00:11:46.787 { 00:11:46.787 "name": "BaseBdev2", 00:11:46.787 "uuid": "05d6dfd7-0bbd-55e1-838d-a502e342da49", 00:11:46.787 "is_configured": true, 00:11:46.787 "data_offset": 2048, 00:11:46.787 "data_size": 63488 00:11:46.787 }, 00:11:46.787 { 00:11:46.787 "name": "BaseBdev3", 00:11:46.787 "uuid": "d2beafa6-c6bd-5c2a-a14a-b443e6ac6bff", 00:11:46.787 "is_configured": true, 00:11:46.787 "data_offset": 2048, 00:11:46.787 "data_size": 63488 00:11:46.787 } 00:11:46.787 ] 00:11:46.787 }' 00:11:46.787 11:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.787 11:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.047 [2024-11-05 11:27:46.214182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.047 [2024-11-05 11:27:46.214270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.047 [2024-11-05 11:27:46.216942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.047 [2024-11-05 11:27:46.217042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.047 [2024-11-05 11:27:46.217186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.047 [2024-11-05 11:27:46.217235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:47.047 { 00:11:47.047 "results": [ 00:11:47.047 { 00:11:47.047 "job": "raid_bdev1", 
00:11:47.047 "core_mask": "0x1", 00:11:47.047 "workload": "randrw", 00:11:47.047 "percentage": 50, 00:11:47.047 "status": "finished", 00:11:47.047 "queue_depth": 1, 00:11:47.047 "io_size": 131072, 00:11:47.047 "runtime": 1.378189, 00:11:47.047 "iops": 13084.562422135135, 00:11:47.047 "mibps": 1635.570302766892, 00:11:47.047 "io_failed": 0, 00:11:47.047 "io_timeout": 0, 00:11:47.047 "avg_latency_us": 73.74000378248805, 00:11:47.047 "min_latency_us": 22.46986899563319, 00:11:47.047 "max_latency_us": 1445.2262008733624 00:11:47.047 } 00:11:47.047 ], 00:11:47.047 "core_count": 1 00:11:47.047 } 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69227 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69227 ']' 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69227 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69227 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69227' 00:11:47.047 killing process with pid 69227 00:11:47.047 11:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69227 00:11:47.047 [2024-11-05 11:27:46.265871] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.047 11:27:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69227 00:11:47.305 [2024-11-05 11:27:46.494712] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hZJU6w9Rz5 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.713 ************************************ 00:11:48.713 END TEST raid_read_error_test 00:11:48.713 ************************************ 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:48.713 00:11:48.713 real 0m4.477s 00:11:48.713 user 0m5.301s 00:11:48.713 sys 0m0.565s 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.713 11:27:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.713 11:27:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:48.713 11:27:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:48.713 11:27:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.713 11:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.713 ************************************ 00:11:48.713 START TEST raid_write_error_test 00:11:48.713 ************************************ 00:11:48.713 11:27:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UyEgqHAi62 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69367 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69367 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69367 ']' 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.713 11:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.713 [2024-11-05 11:27:47.824075] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:11:48.713 [2024-11-05 11:27:47.824217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69367 ] 00:11:48.973 [2024-11-05 11:27:47.997035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.973 [2024-11-05 11:27:48.109064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.233 [2024-11-05 11:27:48.307137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.233 [2024-11-05 11:27:48.307188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 BaseBdev1_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 true 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 [2024-11-05 11:27:48.714992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:49.493 [2024-11-05 11:27:48.715068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.493 [2024-11-05 11:27:48.715090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:49.493 [2024-11-05 11:27:48.715101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.493 [2024-11-05 11:27:48.717278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.493 [2024-11-05 11:27:48.717317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:49.493 BaseBdev1 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.493 BaseBdev2_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.493 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 true 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 [2024-11-05 11:27:48.782675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:49.752 [2024-11-05 11:27:48.782729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.752 [2024-11-05 11:27:48.782761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:49.752 [2024-11-05 11:27:48.782771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.752 [2024-11-05 11:27:48.784875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.752 [2024-11-05 11:27:48.784996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:49.752 BaseBdev2 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.752 11:27:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 BaseBdev3_malloc 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 true 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 [2024-11-05 11:27:48.860993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:49.752 [2024-11-05 11:27:48.861100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.752 [2024-11-05 11:27:48.861122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:49.752 [2024-11-05 11:27:48.861141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.752 [2024-11-05 11:27:48.863250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.752 [2024-11-05 11:27:48.863320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:49.752 BaseBdev3 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.752 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.752 [2024-11-05 11:27:48.873033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.752 [2024-11-05 11:27:48.874820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.752 [2024-11-05 11:27:48.874889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.752 [2024-11-05 11:27:48.875111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:49.753 [2024-11-05 11:27:48.875123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.753 [2024-11-05 11:27:48.875380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:49.753 [2024-11-05 11:27:48.875571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:49.753 [2024-11-05 11:27:48.875584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:49.753 [2024-11-05 11:27:48.875733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.753 "name": "raid_bdev1", 00:11:49.753 "uuid": "f9e36f2a-878c-4a66-9ada-38dd9dbd1859", 00:11:49.753 "strip_size_kb": 0, 00:11:49.753 "state": "online", 00:11:49.753 "raid_level": "raid1", 00:11:49.753 "superblock": true, 00:11:49.753 "num_base_bdevs": 3, 00:11:49.753 "num_base_bdevs_discovered": 3, 00:11:49.753 "num_base_bdevs_operational": 3, 00:11:49.753 "base_bdevs_list": [ 00:11:49.753 { 00:11:49.753 "name": "BaseBdev1", 00:11:49.753 
"uuid": "d293279e-5adb-588a-8601-a24101506645", 00:11:49.753 "is_configured": true, 00:11:49.753 "data_offset": 2048, 00:11:49.753 "data_size": 63488 00:11:49.753 }, 00:11:49.753 { 00:11:49.753 "name": "BaseBdev2", 00:11:49.753 "uuid": "171477db-f322-57c6-8639-b80b1dedc3b7", 00:11:49.753 "is_configured": true, 00:11:49.753 "data_offset": 2048, 00:11:49.753 "data_size": 63488 00:11:49.753 }, 00:11:49.753 { 00:11:49.753 "name": "BaseBdev3", 00:11:49.753 "uuid": "0851cb9c-79d5-51d1-bf60-cf53e61e7fcd", 00:11:49.753 "is_configured": true, 00:11:49.753 "data_offset": 2048, 00:11:49.753 "data_size": 63488 00:11:49.753 } 00:11:49.753 ] 00:11:49.753 }' 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.753 11:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.012 11:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:50.012 11:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:50.272 [2024-11-05 11:27:49.317644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.209 [2024-11-05 11:27:50.228714] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:51.209 [2024-11-05 11:27:50.228855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.209 [2024-11-05 11:27:50.229082] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.209 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.210 "name": "raid_bdev1", 00:11:51.210 "uuid": "f9e36f2a-878c-4a66-9ada-38dd9dbd1859", 00:11:51.210 "strip_size_kb": 0, 00:11:51.210 "state": "online", 00:11:51.210 "raid_level": "raid1", 00:11:51.210 "superblock": true, 00:11:51.210 "num_base_bdevs": 3, 00:11:51.210 "num_base_bdevs_discovered": 2, 00:11:51.210 "num_base_bdevs_operational": 2, 00:11:51.210 "base_bdevs_list": [ 00:11:51.210 { 00:11:51.210 "name": null, 00:11:51.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.210 "is_configured": false, 00:11:51.210 "data_offset": 0, 00:11:51.210 "data_size": 63488 00:11:51.210 }, 00:11:51.210 { 00:11:51.210 "name": "BaseBdev2", 00:11:51.210 "uuid": "171477db-f322-57c6-8639-b80b1dedc3b7", 00:11:51.210 "is_configured": true, 00:11:51.210 "data_offset": 2048, 00:11:51.210 "data_size": 63488 00:11:51.210 }, 00:11:51.210 { 00:11:51.210 "name": "BaseBdev3", 00:11:51.210 "uuid": "0851cb9c-79d5-51d1-bf60-cf53e61e7fcd", 00:11:51.210 "is_configured": true, 00:11:51.210 "data_offset": 2048, 00:11:51.210 "data_size": 63488 00:11:51.210 } 00:11:51.210 ] 00:11:51.210 }' 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.210 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.469 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.469 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.469 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.470 [2024-11-05 11:27:50.663071] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.470 [2024-11-05 11:27:50.663187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.470 [2024-11-05 11:27:50.665920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.470 [2024-11-05 11:27:50.666030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.470 [2024-11-05 11:27:50.666132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.470 [2024-11-05 11:27:50.666199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:51.470 { 00:11:51.470 "results": [ 00:11:51.470 { 00:11:51.470 "job": "raid_bdev1", 00:11:51.470 "core_mask": "0x1", 00:11:51.470 "workload": "randrw", 00:11:51.470 "percentage": 50, 00:11:51.470 "status": "finished", 00:11:51.470 "queue_depth": 1, 00:11:51.470 "io_size": 131072, 00:11:51.470 "runtime": 1.346247, 00:11:51.470 "iops": 14805.604023630136, 00:11:51.470 "mibps": 1850.700502953767, 00:11:51.470 "io_failed": 0, 00:11:51.470 "io_timeout": 0, 00:11:51.470 "avg_latency_us": 64.88228991672122, 00:11:51.470 "min_latency_us": 21.575545851528386, 00:11:51.470 "max_latency_us": 1402.2986899563318 00:11:51.470 } 00:11:51.470 ], 00:11:51.470 "core_count": 1 00:11:51.470 } 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69367 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69367 ']' 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69367 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:51.470 11:27:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69367 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:51.470 killing process with pid 69367 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69367' 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69367 00:11:51.470 [2024-11-05 11:27:50.709964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.470 11:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69367 00:11:51.730 [2024-11-05 11:27:50.937735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UyEgqHAi62 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:53.113 00:11:53.113 real 0m4.345s 00:11:53.113 user 0m5.120s 00:11:53.113 sys 0m0.509s 00:11:53.113 
************************************ 00:11:53.113 END TEST raid_write_error_test 00:11:53.113 ************************************ 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.113 11:27:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.113 11:27:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:53.113 11:27:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:53.113 11:27:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:53.113 11:27:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:53.113 11:27:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.113 11:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.113 ************************************ 00:11:53.113 START TEST raid_state_function_test 00:11:53.113 ************************************ 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69505 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69505' 00:11:53.113 Process raid pid: 69505 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69505 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69505 ']' 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.113 11:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.113 [2024-11-05 11:27:52.258382] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:11:53.113 [2024-11-05 11:27:52.258664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.374 [2024-11-05 11:27:52.453524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.374 [2024-11-05 11:27:52.567593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.634 [2024-11-05 11:27:52.778995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.634 [2024-11-05 11:27:52.779030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.893 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.893 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:53.893 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.893 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.893 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.893 [2024-11-05 11:27:53.113882] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.893 [2024-11-05 11:27:53.113939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.893 [2024-11-05 11:27:53.113950] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.893 [2024-11-05 11:27:53.113976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.893 [2024-11-05 11:27:53.113982] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:53.894 [2024-11-05 11:27:53.113991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.894 [2024-11-05 11:27:53.113997] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.894 [2024-11-05 11:27:53.114006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.894 "name": "Existed_Raid", 00:11:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.894 "strip_size_kb": 64, 00:11:53.894 "state": "configuring", 00:11:53.894 "raid_level": "raid0", 00:11:53.894 "superblock": false, 00:11:53.894 "num_base_bdevs": 4, 00:11:53.894 "num_base_bdevs_discovered": 0, 00:11:53.894 "num_base_bdevs_operational": 4, 00:11:53.894 "base_bdevs_list": [ 00:11:53.894 { 00:11:53.894 "name": "BaseBdev1", 00:11:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.894 "is_configured": false, 00:11:53.894 "data_offset": 0, 00:11:53.894 "data_size": 0 00:11:53.894 }, 00:11:53.894 { 00:11:53.894 "name": "BaseBdev2", 00:11:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.894 "is_configured": false, 00:11:53.894 "data_offset": 0, 00:11:53.894 "data_size": 0 00:11:53.894 }, 00:11:53.894 { 00:11:53.894 "name": "BaseBdev3", 00:11:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.894 "is_configured": false, 00:11:53.894 "data_offset": 0, 00:11:53.894 "data_size": 0 00:11:53.894 }, 00:11:53.894 { 00:11:53.894 "name": "BaseBdev4", 00:11:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.894 "is_configured": false, 00:11:53.894 "data_offset": 0, 00:11:53.894 "data_size": 0 00:11:53.894 } 00:11:53.894 ] 00:11:53.894 }' 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.894 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.464 [2024-11-05 11:27:53.593103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.464 [2024-11-05 11:27:53.593205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.464 [2024-11-05 11:27:53.600969] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.464 [2024-11-05 11:27:53.601047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.464 [2024-11-05 11:27:53.601075] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.464 [2024-11-05 11:27:53.601097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.464 [2024-11-05 11:27:53.601114] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.464 [2024-11-05 11:27:53.601142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.464 [2024-11-05 11:27:53.601176] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.464 [2024-11-05 11:27:53.601198] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.464 [2024-11-05 11:27:53.643274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.464 BaseBdev1 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.464 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.465 [ 00:11:54.465 { 00:11:54.465 "name": "BaseBdev1", 00:11:54.465 "aliases": [ 00:11:54.465 "4ff1d1c8-8781-4577-a113-b88a55b398e0" 00:11:54.465 ], 00:11:54.465 "product_name": "Malloc disk", 00:11:54.465 "block_size": 512, 00:11:54.465 "num_blocks": 65536, 00:11:54.465 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:54.465 "assigned_rate_limits": { 00:11:54.465 "rw_ios_per_sec": 0, 00:11:54.465 "rw_mbytes_per_sec": 0, 00:11:54.465 "r_mbytes_per_sec": 0, 00:11:54.465 "w_mbytes_per_sec": 0 00:11:54.465 }, 00:11:54.465 "claimed": true, 00:11:54.465 "claim_type": "exclusive_write", 00:11:54.465 "zoned": false, 00:11:54.465 "supported_io_types": { 00:11:54.465 "read": true, 00:11:54.465 "write": true, 00:11:54.465 "unmap": true, 00:11:54.465 "flush": true, 00:11:54.465 "reset": true, 00:11:54.465 "nvme_admin": false, 00:11:54.465 "nvme_io": false, 00:11:54.465 "nvme_io_md": false, 00:11:54.465 "write_zeroes": true, 00:11:54.465 "zcopy": true, 00:11:54.465 "get_zone_info": false, 00:11:54.465 "zone_management": false, 00:11:54.465 "zone_append": false, 00:11:54.465 "compare": false, 00:11:54.465 "compare_and_write": false, 00:11:54.465 "abort": true, 00:11:54.465 "seek_hole": false, 00:11:54.465 "seek_data": false, 00:11:54.465 "copy": true, 00:11:54.465 "nvme_iov_md": false 00:11:54.465 }, 00:11:54.465 "memory_domains": [ 00:11:54.465 { 00:11:54.465 "dma_device_id": "system", 00:11:54.465 "dma_device_type": 1 00:11:54.465 }, 00:11:54.465 { 00:11:54.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.465 "dma_device_type": 2 00:11:54.465 } 00:11:54.465 ], 00:11:54.465 "driver_specific": {} 00:11:54.465 } 00:11:54.465 ] 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.465 "name": "Existed_Raid", 
00:11:54.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.465 "strip_size_kb": 64, 00:11:54.465 "state": "configuring", 00:11:54.465 "raid_level": "raid0", 00:11:54.465 "superblock": false, 00:11:54.465 "num_base_bdevs": 4, 00:11:54.465 "num_base_bdevs_discovered": 1, 00:11:54.465 "num_base_bdevs_operational": 4, 00:11:54.465 "base_bdevs_list": [ 00:11:54.465 { 00:11:54.465 "name": "BaseBdev1", 00:11:54.465 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:54.465 "is_configured": true, 00:11:54.465 "data_offset": 0, 00:11:54.465 "data_size": 65536 00:11:54.465 }, 00:11:54.465 { 00:11:54.465 "name": "BaseBdev2", 00:11:54.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.465 "is_configured": false, 00:11:54.465 "data_offset": 0, 00:11:54.465 "data_size": 0 00:11:54.465 }, 00:11:54.465 { 00:11:54.465 "name": "BaseBdev3", 00:11:54.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.465 "is_configured": false, 00:11:54.465 "data_offset": 0, 00:11:54.465 "data_size": 0 00:11:54.465 }, 00:11:54.465 { 00:11:54.465 "name": "BaseBdev4", 00:11:54.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.465 "is_configured": false, 00:11:54.465 "data_offset": 0, 00:11:54.465 "data_size": 0 00:11:54.465 } 00:11:54.465 ] 00:11:54.465 }' 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.465 11:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.034 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.035 [2024-11-05 11:27:54.134562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.035 [2024-11-05 11:27:54.134617] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.035 [2024-11-05 11:27:54.146573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.035 [2024-11-05 11:27:54.148551] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.035 [2024-11-05 11:27:54.148595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.035 [2024-11-05 11:27:54.148606] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.035 [2024-11-05 11:27:54.148618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.035 [2024-11-05 11:27:54.148625] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.035 [2024-11-05 11:27:54.148633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.035 "name": "Existed_Raid", 00:11:55.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.035 "strip_size_kb": 64, 00:11:55.035 "state": "configuring", 00:11:55.035 "raid_level": "raid0", 00:11:55.035 "superblock": false, 00:11:55.035 "num_base_bdevs": 4, 00:11:55.035 
"num_base_bdevs_discovered": 1, 00:11:55.035 "num_base_bdevs_operational": 4, 00:11:55.035 "base_bdevs_list": [ 00:11:55.035 { 00:11:55.035 "name": "BaseBdev1", 00:11:55.035 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:55.035 "is_configured": true, 00:11:55.035 "data_offset": 0, 00:11:55.035 "data_size": 65536 00:11:55.035 }, 00:11:55.035 { 00:11:55.035 "name": "BaseBdev2", 00:11:55.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.035 "is_configured": false, 00:11:55.035 "data_offset": 0, 00:11:55.035 "data_size": 0 00:11:55.035 }, 00:11:55.035 { 00:11:55.035 "name": "BaseBdev3", 00:11:55.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.035 "is_configured": false, 00:11:55.035 "data_offset": 0, 00:11:55.035 "data_size": 0 00:11:55.035 }, 00:11:55.035 { 00:11:55.035 "name": "BaseBdev4", 00:11:55.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.035 "is_configured": false, 00:11:55.035 "data_offset": 0, 00:11:55.035 "data_size": 0 00:11:55.035 } 00:11:55.035 ] 00:11:55.035 }' 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.035 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.295 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.295 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.295 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.555 [2024-11-05 11:27:54.586664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.555 BaseBdev2 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.555 11:27:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.555 [ 00:11:55.555 { 00:11:55.555 "name": "BaseBdev2", 00:11:55.555 "aliases": [ 00:11:55.555 "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd" 00:11:55.555 ], 00:11:55.555 "product_name": "Malloc disk", 00:11:55.555 "block_size": 512, 00:11:55.555 "num_blocks": 65536, 00:11:55.555 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:55.555 "assigned_rate_limits": { 00:11:55.555 "rw_ios_per_sec": 0, 00:11:55.555 "rw_mbytes_per_sec": 0, 00:11:55.555 "r_mbytes_per_sec": 0, 00:11:55.555 "w_mbytes_per_sec": 0 00:11:55.555 }, 00:11:55.555 "claimed": true, 00:11:55.555 "claim_type": "exclusive_write", 00:11:55.555 "zoned": false, 00:11:55.555 "supported_io_types": { 
00:11:55.555 "read": true, 00:11:55.555 "write": true, 00:11:55.555 "unmap": true, 00:11:55.555 "flush": true, 00:11:55.555 "reset": true, 00:11:55.555 "nvme_admin": false, 00:11:55.555 "nvme_io": false, 00:11:55.555 "nvme_io_md": false, 00:11:55.555 "write_zeroes": true, 00:11:55.555 "zcopy": true, 00:11:55.555 "get_zone_info": false, 00:11:55.555 "zone_management": false, 00:11:55.555 "zone_append": false, 00:11:55.555 "compare": false, 00:11:55.555 "compare_and_write": false, 00:11:55.555 "abort": true, 00:11:55.555 "seek_hole": false, 00:11:55.555 "seek_data": false, 00:11:55.555 "copy": true, 00:11:55.555 "nvme_iov_md": false 00:11:55.555 }, 00:11:55.555 "memory_domains": [ 00:11:55.555 { 00:11:55.555 "dma_device_id": "system", 00:11:55.555 "dma_device_type": 1 00:11:55.555 }, 00:11:55.555 { 00:11:55.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.555 "dma_device_type": 2 00:11:55.555 } 00:11:55.555 ], 00:11:55.555 "driver_specific": {} 00:11:55.555 } 00:11:55.555 ] 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.555 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.556 "name": "Existed_Raid", 00:11:55.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.556 "strip_size_kb": 64, 00:11:55.556 "state": "configuring", 00:11:55.556 "raid_level": "raid0", 00:11:55.556 "superblock": false, 00:11:55.556 "num_base_bdevs": 4, 00:11:55.556 "num_base_bdevs_discovered": 2, 00:11:55.556 "num_base_bdevs_operational": 4, 00:11:55.556 "base_bdevs_list": [ 00:11:55.556 { 00:11:55.556 "name": "BaseBdev1", 00:11:55.556 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:55.556 "is_configured": true, 00:11:55.556 "data_offset": 0, 00:11:55.556 "data_size": 65536 00:11:55.556 }, 00:11:55.556 { 00:11:55.556 "name": "BaseBdev2", 00:11:55.556 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:55.556 
"is_configured": true, 00:11:55.556 "data_offset": 0, 00:11:55.556 "data_size": 65536 00:11:55.556 }, 00:11:55.556 { 00:11:55.556 "name": "BaseBdev3", 00:11:55.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.556 "is_configured": false, 00:11:55.556 "data_offset": 0, 00:11:55.556 "data_size": 0 00:11:55.556 }, 00:11:55.556 { 00:11:55.556 "name": "BaseBdev4", 00:11:55.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.556 "is_configured": false, 00:11:55.556 "data_offset": 0, 00:11:55.556 "data_size": 0 00:11:55.556 } 00:11:55.556 ] 00:11:55.556 }' 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.556 11:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.817 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.817 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.817 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 [2024-11-05 11:27:55.132992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.078 BaseBdev3 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 [ 00:11:56.078 { 00:11:56.078 "name": "BaseBdev3", 00:11:56.078 "aliases": [ 00:11:56.078 "7b3b9fbc-31e2-47a8-af03-02226619a2e8" 00:11:56.078 ], 00:11:56.078 "product_name": "Malloc disk", 00:11:56.078 "block_size": 512, 00:11:56.078 "num_blocks": 65536, 00:11:56.078 "uuid": "7b3b9fbc-31e2-47a8-af03-02226619a2e8", 00:11:56.078 "assigned_rate_limits": { 00:11:56.078 "rw_ios_per_sec": 0, 00:11:56.078 "rw_mbytes_per_sec": 0, 00:11:56.078 "r_mbytes_per_sec": 0, 00:11:56.078 "w_mbytes_per_sec": 0 00:11:56.078 }, 00:11:56.078 "claimed": true, 00:11:56.078 "claim_type": "exclusive_write", 00:11:56.078 "zoned": false, 00:11:56.078 "supported_io_types": { 00:11:56.078 "read": true, 00:11:56.078 "write": true, 00:11:56.078 "unmap": true, 00:11:56.078 "flush": true, 00:11:56.078 "reset": true, 00:11:56.078 "nvme_admin": false, 00:11:56.078 "nvme_io": false, 00:11:56.078 "nvme_io_md": false, 00:11:56.078 "write_zeroes": true, 00:11:56.078 "zcopy": true, 00:11:56.078 "get_zone_info": false, 00:11:56.078 "zone_management": false, 00:11:56.078 "zone_append": false, 00:11:56.078 "compare": false, 00:11:56.078 "compare_and_write": false, 
00:11:56.078 "abort": true, 00:11:56.078 "seek_hole": false, 00:11:56.078 "seek_data": false, 00:11:56.078 "copy": true, 00:11:56.078 "nvme_iov_md": false 00:11:56.078 }, 00:11:56.078 "memory_domains": [ 00:11:56.078 { 00:11:56.078 "dma_device_id": "system", 00:11:56.078 "dma_device_type": 1 00:11:56.078 }, 00:11:56.078 { 00:11:56.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.078 "dma_device_type": 2 00:11:56.078 } 00:11:56.078 ], 00:11:56.078 "driver_specific": {} 00:11:56.078 } 00:11:56.078 ] 00:11:56.078 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.079 "name": "Existed_Raid", 00:11:56.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.079 "strip_size_kb": 64, 00:11:56.079 "state": "configuring", 00:11:56.079 "raid_level": "raid0", 00:11:56.079 "superblock": false, 00:11:56.079 "num_base_bdevs": 4, 00:11:56.079 "num_base_bdevs_discovered": 3, 00:11:56.079 "num_base_bdevs_operational": 4, 00:11:56.079 "base_bdevs_list": [ 00:11:56.079 { 00:11:56.079 "name": "BaseBdev1", 00:11:56.079 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:56.079 "is_configured": true, 00:11:56.079 "data_offset": 0, 00:11:56.079 "data_size": 65536 00:11:56.079 }, 00:11:56.079 { 00:11:56.079 "name": "BaseBdev2", 00:11:56.079 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:56.079 "is_configured": true, 00:11:56.079 "data_offset": 0, 00:11:56.079 "data_size": 65536 00:11:56.079 }, 00:11:56.079 { 00:11:56.079 "name": "BaseBdev3", 00:11:56.079 "uuid": "7b3b9fbc-31e2-47a8-af03-02226619a2e8", 00:11:56.079 "is_configured": true, 00:11:56.079 "data_offset": 0, 00:11:56.079 "data_size": 65536 00:11:56.079 }, 00:11:56.079 { 00:11:56.079 "name": "BaseBdev4", 00:11:56.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.079 "is_configured": false, 
00:11:56.079 "data_offset": 0, 00:11:56.079 "data_size": 0 00:11:56.079 } 00:11:56.079 ] 00:11:56.079 }' 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.079 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.340 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.340 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.340 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 [2024-11-05 11:27:55.636763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.601 [2024-11-05 11:27:55.636879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.601 [2024-11-05 11:27:55.636905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:56.601 [2024-11-05 11:27:55.637223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.601 [2024-11-05 11:27:55.637426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.601 [2024-11-05 11:27:55.637473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:56.601 [2024-11-05 11:27:55.637778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.601 BaseBdev4 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.601 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 [ 00:11:56.601 { 00:11:56.601 "name": "BaseBdev4", 00:11:56.601 "aliases": [ 00:11:56.601 "9e481c15-1d52-4462-94ed-95d0db548dfc" 00:11:56.601 ], 00:11:56.601 "product_name": "Malloc disk", 00:11:56.602 "block_size": 512, 00:11:56.602 "num_blocks": 65536, 00:11:56.602 "uuid": "9e481c15-1d52-4462-94ed-95d0db548dfc", 00:11:56.602 "assigned_rate_limits": { 00:11:56.602 "rw_ios_per_sec": 0, 00:11:56.602 "rw_mbytes_per_sec": 0, 00:11:56.602 "r_mbytes_per_sec": 0, 00:11:56.602 "w_mbytes_per_sec": 0 00:11:56.602 }, 00:11:56.602 "claimed": true, 00:11:56.602 "claim_type": "exclusive_write", 00:11:56.602 "zoned": false, 00:11:56.602 "supported_io_types": { 00:11:56.602 "read": true, 00:11:56.602 "write": true, 00:11:56.602 "unmap": true, 00:11:56.602 "flush": true, 00:11:56.602 "reset": true, 00:11:56.602 
"nvme_admin": false, 00:11:56.602 "nvme_io": false, 00:11:56.602 "nvme_io_md": false, 00:11:56.602 "write_zeroes": true, 00:11:56.602 "zcopy": true, 00:11:56.602 "get_zone_info": false, 00:11:56.602 "zone_management": false, 00:11:56.602 "zone_append": false, 00:11:56.602 "compare": false, 00:11:56.602 "compare_and_write": false, 00:11:56.602 "abort": true, 00:11:56.602 "seek_hole": false, 00:11:56.602 "seek_data": false, 00:11:56.602 "copy": true, 00:11:56.602 "nvme_iov_md": false 00:11:56.602 }, 00:11:56.602 "memory_domains": [ 00:11:56.602 { 00:11:56.602 "dma_device_id": "system", 00:11:56.602 "dma_device_type": 1 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.602 "dma_device_type": 2 00:11:56.602 } 00:11:56.602 ], 00:11:56.602 "driver_specific": {} 00:11:56.602 } 00:11:56.602 ] 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.602 11:27:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.602 "name": "Existed_Raid", 00:11:56.602 "uuid": "21bf34c7-e02f-48cb-84ea-af6049bff667", 00:11:56.602 "strip_size_kb": 64, 00:11:56.602 "state": "online", 00:11:56.602 "raid_level": "raid0", 00:11:56.602 "superblock": false, 00:11:56.602 "num_base_bdevs": 4, 00:11:56.602 "num_base_bdevs_discovered": 4, 00:11:56.602 "num_base_bdevs_operational": 4, 00:11:56.602 "base_bdevs_list": [ 00:11:56.602 { 00:11:56.602 "name": "BaseBdev1", 00:11:56.602 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev2", 00:11:56.602 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev3", 00:11:56.602 "uuid": 
"7b3b9fbc-31e2-47a8-af03-02226619a2e8", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 }, 00:11:56.602 { 00:11:56.602 "name": "BaseBdev4", 00:11:56.602 "uuid": "9e481c15-1d52-4462-94ed-95d0db548dfc", 00:11:56.602 "is_configured": true, 00:11:56.602 "data_offset": 0, 00:11:56.602 "data_size": 65536 00:11:56.602 } 00:11:56.602 ] 00:11:56.602 }' 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.602 11:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.862 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 [2024-11-05 11:27:56.140331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.123 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.123 11:27:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.123 "name": "Existed_Raid", 00:11:57.123 "aliases": [ 00:11:57.123 "21bf34c7-e02f-48cb-84ea-af6049bff667" 00:11:57.123 ], 00:11:57.123 "product_name": "Raid Volume", 00:11:57.123 "block_size": 512, 00:11:57.123 "num_blocks": 262144, 00:11:57.123 "uuid": "21bf34c7-e02f-48cb-84ea-af6049bff667", 00:11:57.123 "assigned_rate_limits": { 00:11:57.123 "rw_ios_per_sec": 0, 00:11:57.123 "rw_mbytes_per_sec": 0, 00:11:57.123 "r_mbytes_per_sec": 0, 00:11:57.123 "w_mbytes_per_sec": 0 00:11:57.123 }, 00:11:57.123 "claimed": false, 00:11:57.123 "zoned": false, 00:11:57.123 "supported_io_types": { 00:11:57.123 "read": true, 00:11:57.123 "write": true, 00:11:57.123 "unmap": true, 00:11:57.123 "flush": true, 00:11:57.123 "reset": true, 00:11:57.123 "nvme_admin": false, 00:11:57.123 "nvme_io": false, 00:11:57.123 "nvme_io_md": false, 00:11:57.123 "write_zeroes": true, 00:11:57.123 "zcopy": false, 00:11:57.123 "get_zone_info": false, 00:11:57.123 "zone_management": false, 00:11:57.123 "zone_append": false, 00:11:57.123 "compare": false, 00:11:57.123 "compare_and_write": false, 00:11:57.123 "abort": false, 00:11:57.123 "seek_hole": false, 00:11:57.123 "seek_data": false, 00:11:57.123 "copy": false, 00:11:57.123 "nvme_iov_md": false 00:11:57.123 }, 00:11:57.123 "memory_domains": [ 00:11:57.123 { 00:11:57.123 "dma_device_id": "system", 00:11:57.123 "dma_device_type": 1 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.123 "dma_device_type": 2 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "system", 00:11:57.123 "dma_device_type": 1 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.123 "dma_device_type": 2 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "system", 00:11:57.123 "dma_device_type": 1 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:57.123 "dma_device_type": 2 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "system", 00:11:57.123 "dma_device_type": 1 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.123 "dma_device_type": 2 00:11:57.123 } 00:11:57.123 ], 00:11:57.123 "driver_specific": { 00:11:57.123 "raid": { 00:11:57.123 "uuid": "21bf34c7-e02f-48cb-84ea-af6049bff667", 00:11:57.123 "strip_size_kb": 64, 00:11:57.123 "state": "online", 00:11:57.123 "raid_level": "raid0", 00:11:57.123 "superblock": false, 00:11:57.123 "num_base_bdevs": 4, 00:11:57.123 "num_base_bdevs_discovered": 4, 00:11:57.123 "num_base_bdevs_operational": 4, 00:11:57.123 "base_bdevs_list": [ 00:11:57.123 { 00:11:57.123 "name": "BaseBdev1", 00:11:57.123 "uuid": "4ff1d1c8-8781-4577-a113-b88a55b398e0", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 }, 00:11:57.123 { 00:11:57.123 "name": "BaseBdev2", 00:11:57.123 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:57.123 "is_configured": true, 00:11:57.123 "data_offset": 0, 00:11:57.123 "data_size": 65536 00:11:57.123 }, 00:11:57.124 { 00:11:57.124 "name": "BaseBdev3", 00:11:57.124 "uuid": "7b3b9fbc-31e2-47a8-af03-02226619a2e8", 00:11:57.124 "is_configured": true, 00:11:57.124 "data_offset": 0, 00:11:57.124 "data_size": 65536 00:11:57.124 }, 00:11:57.124 { 00:11:57.124 "name": "BaseBdev4", 00:11:57.124 "uuid": "9e481c15-1d52-4462-94ed-95d0db548dfc", 00:11:57.124 "is_configured": true, 00:11:57.124 "data_offset": 0, 00:11:57.124 "data_size": 65536 00:11:57.124 } 00:11:57.124 ] 00:11:57.124 } 00:11:57.124 } 00:11:57.124 }' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.124 BaseBdev2 00:11:57.124 BaseBdev3 
00:11:57.124 BaseBdev4' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.124 11:27:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.385 11:27:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.385 [2024-11-05 11:27:56.471459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.385 [2024-11-05 11:27:56.471538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.385 [2024-11-05 11:27:56.471614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.385 "name": "Existed_Raid", 00:11:57.385 "uuid": "21bf34c7-e02f-48cb-84ea-af6049bff667", 00:11:57.385 "strip_size_kb": 64, 00:11:57.385 "state": "offline", 00:11:57.385 "raid_level": "raid0", 00:11:57.385 "superblock": false, 00:11:57.385 "num_base_bdevs": 4, 00:11:57.385 "num_base_bdevs_discovered": 3, 00:11:57.385 "num_base_bdevs_operational": 3, 00:11:57.385 "base_bdevs_list": [ 00:11:57.385 { 00:11:57.385 "name": null, 00:11:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.385 "is_configured": false, 00:11:57.385 "data_offset": 0, 00:11:57.385 "data_size": 65536 00:11:57.385 }, 00:11:57.385 { 00:11:57.385 "name": "BaseBdev2", 00:11:57.385 "uuid": "635fe64d-f9f0-4eaf-9dfb-fd0d2dd911bd", 00:11:57.385 "is_configured": 
true, 00:11:57.385 "data_offset": 0, 00:11:57.385 "data_size": 65536 00:11:57.385 }, 00:11:57.385 { 00:11:57.385 "name": "BaseBdev3", 00:11:57.385 "uuid": "7b3b9fbc-31e2-47a8-af03-02226619a2e8", 00:11:57.385 "is_configured": true, 00:11:57.385 "data_offset": 0, 00:11:57.385 "data_size": 65536 00:11:57.385 }, 00:11:57.385 { 00:11:57.385 "name": "BaseBdev4", 00:11:57.385 "uuid": "9e481c15-1d52-4462-94ed-95d0db548dfc", 00:11:57.385 "is_configured": true, 00:11:57.385 "data_offset": 0, 00:11:57.385 "data_size": 65536 00:11:57.385 } 00:11:57.385 ] 00:11:57.385 }' 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.385 11:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.956 [2024-11-05 11:27:57.096207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.956 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.217 [2024-11-05 11:27:57.246154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.217 11:27:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.217 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.217 [2024-11-05 11:27:57.398265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:58.217 [2024-11-05 11:27:57.398314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.478 BaseBdev2 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.478 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 [ 00:11:58.479 { 00:11:58.479 "name": "BaseBdev2", 00:11:58.479 "aliases": [ 00:11:58.479 "0f8eeb28-37a0-4171-93eb-699d22048929" 00:11:58.479 ], 00:11:58.479 "product_name": "Malloc disk", 00:11:58.479 "block_size": 512, 00:11:58.479 "num_blocks": 65536, 00:11:58.479 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:11:58.479 "assigned_rate_limits": { 00:11:58.479 "rw_ios_per_sec": 0, 00:11:58.479 "rw_mbytes_per_sec": 0, 00:11:58.479 "r_mbytes_per_sec": 0, 00:11:58.479 "w_mbytes_per_sec": 0 00:11:58.479 }, 00:11:58.479 "claimed": false, 00:11:58.479 "zoned": false, 00:11:58.479 "supported_io_types": { 00:11:58.479 "read": true, 00:11:58.479 "write": true, 00:11:58.479 "unmap": true, 00:11:58.479 "flush": true, 00:11:58.479 "reset": true, 00:11:58.479 "nvme_admin": false, 00:11:58.479 "nvme_io": false, 00:11:58.479 "nvme_io_md": false, 00:11:58.479 "write_zeroes": true, 00:11:58.479 "zcopy": true, 00:11:58.479 "get_zone_info": false, 00:11:58.479 "zone_management": false, 00:11:58.479 "zone_append": false, 00:11:58.479 "compare": false, 00:11:58.479 "compare_and_write": false, 00:11:58.479 "abort": true, 00:11:58.479 "seek_hole": false, 00:11:58.479 
"seek_data": false, 00:11:58.479 "copy": true, 00:11:58.479 "nvme_iov_md": false 00:11:58.479 }, 00:11:58.479 "memory_domains": [ 00:11:58.479 { 00:11:58.479 "dma_device_id": "system", 00:11:58.479 "dma_device_type": 1 00:11:58.479 }, 00:11:58.479 { 00:11:58.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.479 "dma_device_type": 2 00:11:58.479 } 00:11:58.479 ], 00:11:58.479 "driver_specific": {} 00:11:58.479 } 00:11:58.479 ] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 BaseBdev3 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 [ 00:11:58.479 { 00:11:58.479 "name": "BaseBdev3", 00:11:58.479 "aliases": [ 00:11:58.479 "a46cf66c-101e-4983-9e35-c244d881b588" 00:11:58.479 ], 00:11:58.479 "product_name": "Malloc disk", 00:11:58.479 "block_size": 512, 00:11:58.479 "num_blocks": 65536, 00:11:58.479 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:11:58.479 "assigned_rate_limits": { 00:11:58.479 "rw_ios_per_sec": 0, 00:11:58.479 "rw_mbytes_per_sec": 0, 00:11:58.479 "r_mbytes_per_sec": 0, 00:11:58.479 "w_mbytes_per_sec": 0 00:11:58.479 }, 00:11:58.479 "claimed": false, 00:11:58.479 "zoned": false, 00:11:58.479 "supported_io_types": { 00:11:58.479 "read": true, 00:11:58.479 "write": true, 00:11:58.479 "unmap": true, 00:11:58.479 "flush": true, 00:11:58.479 "reset": true, 00:11:58.479 "nvme_admin": false, 00:11:58.479 "nvme_io": false, 00:11:58.479 "nvme_io_md": false, 00:11:58.479 "write_zeroes": true, 00:11:58.479 "zcopy": true, 00:11:58.479 "get_zone_info": false, 00:11:58.479 "zone_management": false, 00:11:58.479 "zone_append": false, 00:11:58.479 "compare": false, 00:11:58.479 "compare_and_write": false, 00:11:58.479 "abort": true, 00:11:58.479 "seek_hole": false, 00:11:58.479 "seek_data": false, 
00:11:58.479 "copy": true, 00:11:58.479 "nvme_iov_md": false 00:11:58.479 }, 00:11:58.479 "memory_domains": [ 00:11:58.479 { 00:11:58.479 "dma_device_id": "system", 00:11:58.479 "dma_device_type": 1 00:11:58.479 }, 00:11:58.479 { 00:11:58.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.479 "dma_device_type": 2 00:11:58.479 } 00:11:58.479 ], 00:11:58.479 "driver_specific": {} 00:11:58.479 } 00:11:58.479 ] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.479 BaseBdev4 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:58.479 
11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.479 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.740 [ 00:11:58.740 { 00:11:58.740 "name": "BaseBdev4", 00:11:58.740 "aliases": [ 00:11:58.740 "144856b6-401e-4fe8-af5c-090fe500aaed" 00:11:58.740 ], 00:11:58.740 "product_name": "Malloc disk", 00:11:58.740 "block_size": 512, 00:11:58.740 "num_blocks": 65536, 00:11:58.740 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:11:58.740 "assigned_rate_limits": { 00:11:58.740 "rw_ios_per_sec": 0, 00:11:58.740 "rw_mbytes_per_sec": 0, 00:11:58.740 "r_mbytes_per_sec": 0, 00:11:58.740 "w_mbytes_per_sec": 0 00:11:58.740 }, 00:11:58.740 "claimed": false, 00:11:58.740 "zoned": false, 00:11:58.740 "supported_io_types": { 00:11:58.740 "read": true, 00:11:58.740 "write": true, 00:11:58.740 "unmap": true, 00:11:58.740 "flush": true, 00:11:58.740 "reset": true, 00:11:58.740 "nvme_admin": false, 00:11:58.740 "nvme_io": false, 00:11:58.740 "nvme_io_md": false, 00:11:58.740 "write_zeroes": true, 00:11:58.740 "zcopy": true, 00:11:58.740 "get_zone_info": false, 00:11:58.740 "zone_management": false, 00:11:58.740 "zone_append": false, 00:11:58.740 "compare": false, 00:11:58.740 "compare_and_write": false, 00:11:58.740 "abort": true, 00:11:58.740 "seek_hole": false, 00:11:58.740 "seek_data": false, 00:11:58.740 
"copy": true, 00:11:58.740 "nvme_iov_md": false 00:11:58.740 }, 00:11:58.740 "memory_domains": [ 00:11:58.740 { 00:11:58.740 "dma_device_id": "system", 00:11:58.740 "dma_device_type": 1 00:11:58.740 }, 00:11:58.740 { 00:11:58.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.740 "dma_device_type": 2 00:11:58.740 } 00:11:58.740 ], 00:11:58.740 "driver_specific": {} 00:11:58.740 } 00:11:58.740 ] 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.740 [2024-11-05 11:27:57.785309] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.740 [2024-11-05 11:27:57.785358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.740 [2024-11-05 11:27:57.785382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.740 [2024-11-05 11:27:57.787321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.740 [2024-11-05 11:27:57.787380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.740 11:27:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.740 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.740 "name": "Existed_Raid", 00:11:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.741 "strip_size_kb": 64, 00:11:58.741 "state": "configuring", 00:11:58.741 
"raid_level": "raid0", 00:11:58.741 "superblock": false, 00:11:58.741 "num_base_bdevs": 4, 00:11:58.741 "num_base_bdevs_discovered": 3, 00:11:58.741 "num_base_bdevs_operational": 4, 00:11:58.741 "base_bdevs_list": [ 00:11:58.741 { 00:11:58.741 "name": "BaseBdev1", 00:11:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.741 "is_configured": false, 00:11:58.741 "data_offset": 0, 00:11:58.741 "data_size": 0 00:11:58.741 }, 00:11:58.741 { 00:11:58.741 "name": "BaseBdev2", 00:11:58.741 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:11:58.741 "is_configured": true, 00:11:58.741 "data_offset": 0, 00:11:58.741 "data_size": 65536 00:11:58.741 }, 00:11:58.741 { 00:11:58.741 "name": "BaseBdev3", 00:11:58.741 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:11:58.741 "is_configured": true, 00:11:58.741 "data_offset": 0, 00:11:58.741 "data_size": 65536 00:11:58.741 }, 00:11:58.741 { 00:11:58.741 "name": "BaseBdev4", 00:11:58.741 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:11:58.741 "is_configured": true, 00:11:58.741 "data_offset": 0, 00:11:58.741 "data_size": 65536 00:11:58.741 } 00:11:58.741 ] 00:11:58.741 }' 00:11:58.741 11:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.741 11:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.004 [2024-11-05 11:27:58.248542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.004 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.263 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.263 "name": "Existed_Raid", 00:11:59.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.263 "strip_size_kb": 64, 00:11:59.263 "state": "configuring", 00:11:59.263 "raid_level": "raid0", 00:11:59.263 "superblock": false, 00:11:59.263 
"num_base_bdevs": 4, 00:11:59.263 "num_base_bdevs_discovered": 2, 00:11:59.263 "num_base_bdevs_operational": 4, 00:11:59.263 "base_bdevs_list": [ 00:11:59.263 { 00:11:59.263 "name": "BaseBdev1", 00:11:59.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.264 "is_configured": false, 00:11:59.264 "data_offset": 0, 00:11:59.264 "data_size": 0 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "name": null, 00:11:59.264 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:11:59.264 "is_configured": false, 00:11:59.264 "data_offset": 0, 00:11:59.264 "data_size": 65536 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "name": "BaseBdev3", 00:11:59.264 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:11:59.264 "is_configured": true, 00:11:59.264 "data_offset": 0, 00:11:59.264 "data_size": 65536 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "name": "BaseBdev4", 00:11:59.264 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:11:59.264 "is_configured": true, 00:11:59.264 "data_offset": 0, 00:11:59.264 "data_size": 65536 00:11:59.264 } 00:11:59.264 ] 00:11:59.264 }' 00:11:59.264 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.264 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:59.524 11:27:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 [2024-11-05 11:27:58.735759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.524 BaseBdev1 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.524 11:27:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.524 [ 00:11:59.524 { 00:11:59.524 "name": "BaseBdev1", 00:11:59.524 "aliases": [ 00:11:59.524 "0fead6d3-4f7e-4bab-abf0-e571f22778e8" 00:11:59.524 ], 00:11:59.524 "product_name": "Malloc disk", 00:11:59.524 "block_size": 512, 00:11:59.524 "num_blocks": 65536, 00:11:59.524 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:11:59.524 "assigned_rate_limits": { 00:11:59.524 "rw_ios_per_sec": 0, 00:11:59.524 "rw_mbytes_per_sec": 0, 00:11:59.524 "r_mbytes_per_sec": 0, 00:11:59.524 "w_mbytes_per_sec": 0 00:11:59.524 }, 00:11:59.524 "claimed": true, 00:11:59.524 "claim_type": "exclusive_write", 00:11:59.524 "zoned": false, 00:11:59.524 "supported_io_types": { 00:11:59.524 "read": true, 00:11:59.525 "write": true, 00:11:59.525 "unmap": true, 00:11:59.525 "flush": true, 00:11:59.525 "reset": true, 00:11:59.525 "nvme_admin": false, 00:11:59.525 "nvme_io": false, 00:11:59.525 "nvme_io_md": false, 00:11:59.525 "write_zeroes": true, 00:11:59.525 "zcopy": true, 00:11:59.525 "get_zone_info": false, 00:11:59.525 "zone_management": false, 00:11:59.525 "zone_append": false, 00:11:59.525 "compare": false, 00:11:59.525 "compare_and_write": false, 00:11:59.525 "abort": true, 00:11:59.525 "seek_hole": false, 00:11:59.525 "seek_data": false, 00:11:59.525 "copy": true, 00:11:59.525 "nvme_iov_md": false 00:11:59.525 }, 00:11:59.525 "memory_domains": [ 00:11:59.525 { 00:11:59.525 "dma_device_id": "system", 00:11:59.525 "dma_device_type": 1 00:11:59.525 }, 00:11:59.525 { 00:11:59.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.525 "dma_device_type": 2 00:11:59.525 } 00:11:59.525 ], 00:11:59.525 "driver_specific": {} 00:11:59.525 } 00:11:59.525 ] 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.525 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.784 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.784 "name": "Existed_Raid", 00:11:59.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.784 "strip_size_kb": 64, 00:11:59.784 "state": "configuring", 00:11:59.784 "raid_level": "raid0", 00:11:59.784 "superblock": false, 
00:11:59.784 "num_base_bdevs": 4, 00:11:59.784 "num_base_bdevs_discovered": 3, 00:11:59.784 "num_base_bdevs_operational": 4, 00:11:59.784 "base_bdevs_list": [ 00:11:59.784 { 00:11:59.784 "name": "BaseBdev1", 00:11:59.784 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 65536 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": null, 00:11:59.784 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:11:59.784 "is_configured": false, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 65536 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev3", 00:11:59.784 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 65536 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev4", 00:11:59.784 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 65536 00:11:59.784 } 00:11:59.784 ] 00:11:59.784 }' 00:11:59.784 11:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.784 11:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.044 11:27:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 [2024-11-05 11:27:59.263104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.044 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.045 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.045 "name": "Existed_Raid", 00:12:00.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.045 "strip_size_kb": 64, 00:12:00.045 "state": "configuring", 00:12:00.045 "raid_level": "raid0", 00:12:00.045 "superblock": false, 00:12:00.045 "num_base_bdevs": 4, 00:12:00.045 "num_base_bdevs_discovered": 2, 00:12:00.045 "num_base_bdevs_operational": 4, 00:12:00.045 "base_bdevs_list": [ 00:12:00.045 { 00:12:00.045 "name": "BaseBdev1", 00:12:00.045 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:00.045 "is_configured": true, 00:12:00.045 "data_offset": 0, 00:12:00.045 "data_size": 65536 00:12:00.045 }, 00:12:00.045 { 00:12:00.045 "name": null, 00:12:00.045 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:00.045 "is_configured": false, 00:12:00.045 "data_offset": 0, 00:12:00.045 "data_size": 65536 00:12:00.045 }, 00:12:00.045 { 00:12:00.045 "name": null, 00:12:00.045 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:00.045 "is_configured": false, 00:12:00.045 "data_offset": 0, 00:12:00.045 "data_size": 65536 00:12:00.045 }, 00:12:00.045 { 00:12:00.045 "name": "BaseBdev4", 00:12:00.045 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:00.045 "is_configured": true, 00:12:00.045 "data_offset": 0, 00:12:00.045 "data_size": 65536 00:12:00.045 } 00:12:00.045 ] 00:12:00.045 }' 00:12:00.305 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.305 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 [2024-11-05 11:27:59.734254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.566 "name": "Existed_Raid", 00:12:00.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.566 "strip_size_kb": 64, 00:12:00.566 "state": "configuring", 00:12:00.566 "raid_level": "raid0", 00:12:00.566 "superblock": false, 00:12:00.566 "num_base_bdevs": 4, 00:12:00.566 "num_base_bdevs_discovered": 3, 00:12:00.566 "num_base_bdevs_operational": 4, 00:12:00.566 "base_bdevs_list": [ 00:12:00.566 { 00:12:00.566 "name": "BaseBdev1", 00:12:00.566 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:00.566 "is_configured": true, 00:12:00.566 "data_offset": 0, 00:12:00.566 "data_size": 65536 00:12:00.566 }, 00:12:00.566 { 00:12:00.566 "name": null, 00:12:00.566 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:00.566 "is_configured": false, 00:12:00.566 "data_offset": 0, 00:12:00.566 "data_size": 65536 00:12:00.566 }, 00:12:00.566 { 00:12:00.566 "name": "BaseBdev3", 00:12:00.566 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:00.566 "is_configured": 
true, 00:12:00.566 "data_offset": 0, 00:12:00.566 "data_size": 65536 00:12:00.566 }, 00:12:00.566 { 00:12:00.566 "name": "BaseBdev4", 00:12:00.566 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:00.566 "is_configured": true, 00:12:00.566 "data_offset": 0, 00:12:00.566 "data_size": 65536 00:12:00.566 } 00:12:00.566 ] 00:12:00.566 }' 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.566 11:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.136 [2024-11-05 11:28:00.241404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
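Across this whole sequence (remove BaseBdev2, create BaseBdev1, remove BaseBdev3, `bdev_raid_add_base_bdev` BaseBdev3 back) the raid never leaves "configuring", because at no point are all four base bdevs present at once. A simplified model of that behavior (an assumption for illustration, not SPDK's actual state machine, which also has online/degraded/offline handling) can replay the log's operations:

```python
# Toy model: the array stays in "configuring" until every one of its
# num_base_bdevs slots has a discovered base bdev.
class RaidState:
    def __init__(self, num_base_bdevs):
        self.num_base_bdevs = num_base_bdevs
        self.discovered = set()

    def add(self, name):
        self.discovered.add(name)

    def remove(self, name):
        self.discovered.discard(name)

    @property
    def state(self):
        return ("online" if len(self.discovered) == self.num_base_bdevs
                else "configuring")

raid = RaidState(4)
for b in ("BaseBdev2", "BaseBdev3", "BaseBdev4"):  # BaseBdev1 created late
    raid.add(b)
raid.remove("BaseBdev2")   # bdev_raid_remove_base_bdev BaseBdev2
raid.add("BaseBdev1")      # bdev_malloc_create ... -b BaseBdev1
raid.remove("BaseBdev3")   # bdev_raid_remove_base_bdev BaseBdev3
raid.add("BaseBdev3")      # bdev_raid_add_base_bdev Existed_Raid BaseBdev3
print(raid.state)  # configuring -- BaseBdev2's slot is still empty
```

This matches the log: after re-adding BaseBdev3, `num_base_bdevs_discovered` is back to 3 of 4, so `verify_raid_bdev_state ... configuring` still passes.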
-- # local raid_bdev_name=Existed_Raid 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.136 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.136 "name": "Existed_Raid", 00:12:01.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.136 "strip_size_kb": 64, 00:12:01.136 "state": "configuring", 00:12:01.136 "raid_level": "raid0", 00:12:01.136 "superblock": false, 00:12:01.136 "num_base_bdevs": 4, 00:12:01.136 "num_base_bdevs_discovered": 2, 00:12:01.136 "num_base_bdevs_operational": 4, 00:12:01.136 
"base_bdevs_list": [ 00:12:01.136 { 00:12:01.136 "name": null, 00:12:01.136 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:01.136 "is_configured": false, 00:12:01.136 "data_offset": 0, 00:12:01.136 "data_size": 65536 00:12:01.136 }, 00:12:01.136 { 00:12:01.136 "name": null, 00:12:01.136 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:01.136 "is_configured": false, 00:12:01.136 "data_offset": 0, 00:12:01.136 "data_size": 65536 00:12:01.136 }, 00:12:01.136 { 00:12:01.136 "name": "BaseBdev3", 00:12:01.136 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:01.136 "is_configured": true, 00:12:01.136 "data_offset": 0, 00:12:01.136 "data_size": 65536 00:12:01.136 }, 00:12:01.136 { 00:12:01.136 "name": "BaseBdev4", 00:12:01.136 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:01.137 "is_configured": true, 00:12:01.137 "data_offset": 0, 00:12:01.137 "data_size": 65536 00:12:01.137 } 00:12:01.137 ] 00:12:01.137 }' 00:12:01.137 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.137 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:01.706 11:28:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.706 [2024-11-05 11:28:00.823998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.706 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.706 "name": "Existed_Raid", 00:12:01.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.706 "strip_size_kb": 64, 00:12:01.706 "state": "configuring", 00:12:01.706 "raid_level": "raid0", 00:12:01.706 "superblock": false, 00:12:01.706 "num_base_bdevs": 4, 00:12:01.706 "num_base_bdevs_discovered": 3, 00:12:01.706 "num_base_bdevs_operational": 4, 00:12:01.706 "base_bdevs_list": [ 00:12:01.706 { 00:12:01.706 "name": null, 00:12:01.706 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:01.706 "is_configured": false, 00:12:01.707 "data_offset": 0, 00:12:01.707 "data_size": 65536 00:12:01.707 }, 00:12:01.707 { 00:12:01.707 "name": "BaseBdev2", 00:12:01.707 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:01.707 "is_configured": true, 00:12:01.707 "data_offset": 0, 00:12:01.707 "data_size": 65536 00:12:01.707 }, 00:12:01.707 { 00:12:01.707 "name": "BaseBdev3", 00:12:01.707 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:01.707 "is_configured": true, 00:12:01.707 "data_offset": 0, 00:12:01.707 "data_size": 65536 00:12:01.707 }, 00:12:01.707 { 00:12:01.707 "name": "BaseBdev4", 00:12:01.707 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:01.707 "is_configured": true, 00:12:01.707 "data_offset": 0, 00:12:01.707 "data_size": 65536 00:12:01.707 } 00:12:01.707 ] 00:12:01.707 }' 00:12:01.707 11:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.707 11:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0fead6d3-4f7e-4bab-abf0-e571f22778e8 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 [2024-11-05 11:28:01.375060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.276 [2024-11-05 11:28:01.375105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.276 [2024-11-05 11:28:01.375113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:02.276 [2024-11-05 11:28:01.375414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:02.276 [2024-11-05 11:28:01.375568] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.276 [2024-11-05 11:28:01.375587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.276 [2024-11-05 11:28:01.375817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.276 NewBaseBdev 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.276 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.276 [ 00:12:02.276 { 
00:12:02.276 "name": "NewBaseBdev", 00:12:02.276 "aliases": [ 00:12:02.276 "0fead6d3-4f7e-4bab-abf0-e571f22778e8" 00:12:02.276 ], 00:12:02.276 "product_name": "Malloc disk", 00:12:02.276 "block_size": 512, 00:12:02.276 "num_blocks": 65536, 00:12:02.276 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:02.276 "assigned_rate_limits": { 00:12:02.276 "rw_ios_per_sec": 0, 00:12:02.276 "rw_mbytes_per_sec": 0, 00:12:02.276 "r_mbytes_per_sec": 0, 00:12:02.276 "w_mbytes_per_sec": 0 00:12:02.276 }, 00:12:02.276 "claimed": true, 00:12:02.276 "claim_type": "exclusive_write", 00:12:02.276 "zoned": false, 00:12:02.277 "supported_io_types": { 00:12:02.277 "read": true, 00:12:02.277 "write": true, 00:12:02.277 "unmap": true, 00:12:02.277 "flush": true, 00:12:02.277 "reset": true, 00:12:02.277 "nvme_admin": false, 00:12:02.277 "nvme_io": false, 00:12:02.277 "nvme_io_md": false, 00:12:02.277 "write_zeroes": true, 00:12:02.277 "zcopy": true, 00:12:02.277 "get_zone_info": false, 00:12:02.277 "zone_management": false, 00:12:02.277 "zone_append": false, 00:12:02.277 "compare": false, 00:12:02.277 "compare_and_write": false, 00:12:02.277 "abort": true, 00:12:02.277 "seek_hole": false, 00:12:02.277 "seek_data": false, 00:12:02.277 "copy": true, 00:12:02.277 "nvme_iov_md": false 00:12:02.277 }, 00:12:02.277 "memory_domains": [ 00:12:02.277 { 00:12:02.277 "dma_device_id": "system", 00:12:02.277 "dma_device_type": 1 00:12:02.277 }, 00:12:02.277 { 00:12:02.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.277 "dma_device_type": 2 00:12:02.277 } 00:12:02.277 ], 00:12:02.277 "driver_specific": {} 00:12:02.277 } 00:12:02.277 ] 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:02.277 
11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.277 "name": "Existed_Raid", 00:12:02.277 "uuid": "bf3d4070-2af9-48a9-919a-df3d5b8f9263", 00:12:02.277 "strip_size_kb": 64, 00:12:02.277 "state": "online", 00:12:02.277 "raid_level": "raid0", 00:12:02.277 "superblock": false, 00:12:02.277 "num_base_bdevs": 4, 00:12:02.277 "num_base_bdevs_discovered": 4, 00:12:02.277 
"num_base_bdevs_operational": 4, 00:12:02.277 "base_bdevs_list": [ 00:12:02.277 { 00:12:02.277 "name": "NewBaseBdev", 00:12:02.277 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:02.277 "is_configured": true, 00:12:02.277 "data_offset": 0, 00:12:02.277 "data_size": 65536 00:12:02.277 }, 00:12:02.277 { 00:12:02.277 "name": "BaseBdev2", 00:12:02.277 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:02.277 "is_configured": true, 00:12:02.277 "data_offset": 0, 00:12:02.277 "data_size": 65536 00:12:02.277 }, 00:12:02.277 { 00:12:02.277 "name": "BaseBdev3", 00:12:02.277 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:02.277 "is_configured": true, 00:12:02.277 "data_offset": 0, 00:12:02.277 "data_size": 65536 00:12:02.277 }, 00:12:02.277 { 00:12:02.277 "name": "BaseBdev4", 00:12:02.277 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:02.277 "is_configured": true, 00:12:02.277 "data_offset": 0, 00:12:02.277 "data_size": 65536 00:12:02.277 } 00:12:02.277 ] 00:12:02.277 }' 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.277 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.846 
11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.846 [2024-11-05 11:28:01.858651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.846 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.846 "name": "Existed_Raid", 00:12:02.846 "aliases": [ 00:12:02.846 "bf3d4070-2af9-48a9-919a-df3d5b8f9263" 00:12:02.846 ], 00:12:02.846 "product_name": "Raid Volume", 00:12:02.846 "block_size": 512, 00:12:02.846 "num_blocks": 262144, 00:12:02.846 "uuid": "bf3d4070-2af9-48a9-919a-df3d5b8f9263", 00:12:02.846 "assigned_rate_limits": { 00:12:02.846 "rw_ios_per_sec": 0, 00:12:02.846 "rw_mbytes_per_sec": 0, 00:12:02.846 "r_mbytes_per_sec": 0, 00:12:02.846 "w_mbytes_per_sec": 0 00:12:02.846 }, 00:12:02.846 "claimed": false, 00:12:02.846 "zoned": false, 00:12:02.846 "supported_io_types": { 00:12:02.846 "read": true, 00:12:02.846 "write": true, 00:12:02.846 "unmap": true, 00:12:02.846 "flush": true, 00:12:02.846 "reset": true, 00:12:02.846 "nvme_admin": false, 00:12:02.846 "nvme_io": false, 00:12:02.846 "nvme_io_md": false, 00:12:02.846 "write_zeroes": true, 00:12:02.846 "zcopy": false, 00:12:02.846 "get_zone_info": false, 00:12:02.847 "zone_management": false, 00:12:02.847 "zone_append": false, 00:12:02.847 "compare": false, 00:12:02.847 "compare_and_write": false, 00:12:02.847 "abort": false, 00:12:02.847 "seek_hole": false, 00:12:02.847 "seek_data": false, 00:12:02.847 "copy": false, 00:12:02.847 "nvme_iov_md": false 00:12:02.847 }, 00:12:02.847 "memory_domains": [ 00:12:02.847 { 00:12:02.847 "dma_device_id": 
"system", 00:12:02.847 "dma_device_type": 1 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.847 "dma_device_type": 2 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "system", 00:12:02.847 "dma_device_type": 1 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.847 "dma_device_type": 2 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "system", 00:12:02.847 "dma_device_type": 1 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.847 "dma_device_type": 2 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "system", 00:12:02.847 "dma_device_type": 1 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.847 "dma_device_type": 2 00:12:02.847 } 00:12:02.847 ], 00:12:02.847 "driver_specific": { 00:12:02.847 "raid": { 00:12:02.847 "uuid": "bf3d4070-2af9-48a9-919a-df3d5b8f9263", 00:12:02.847 "strip_size_kb": 64, 00:12:02.847 "state": "online", 00:12:02.847 "raid_level": "raid0", 00:12:02.847 "superblock": false, 00:12:02.847 "num_base_bdevs": 4, 00:12:02.847 "num_base_bdevs_discovered": 4, 00:12:02.847 "num_base_bdevs_operational": 4, 00:12:02.847 "base_bdevs_list": [ 00:12:02.847 { 00:12:02.847 "name": "NewBaseBdev", 00:12:02.847 "uuid": "0fead6d3-4f7e-4bab-abf0-e571f22778e8", 00:12:02.847 "is_configured": true, 00:12:02.847 "data_offset": 0, 00:12:02.847 "data_size": 65536 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "name": "BaseBdev2", 00:12:02.847 "uuid": "0f8eeb28-37a0-4171-93eb-699d22048929", 00:12:02.847 "is_configured": true, 00:12:02.847 "data_offset": 0, 00:12:02.847 "data_size": 65536 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "name": "BaseBdev3", 00:12:02.847 "uuid": "a46cf66c-101e-4983-9e35-c244d881b588", 00:12:02.847 "is_configured": true, 00:12:02.847 "data_offset": 0, 00:12:02.847 "data_size": 65536 00:12:02.847 }, 00:12:02.847 { 00:12:02.847 "name": 
"BaseBdev4", 00:12:02.847 "uuid": "144856b6-401e-4fe8-af5c-090fe500aaed", 00:12:02.847 "is_configured": true, 00:12:02.847 "data_offset": 0, 00:12:02.847 "data_size": 65536 00:12:02.847 } 00:12:02.847 ] 00:12:02.847 } 00:12:02.847 } 00:12:02.847 }' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:02.847 BaseBdev2 00:12:02.847 BaseBdev3 00:12:02.847 BaseBdev4' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.847 11:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:02.847 11:28:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.847 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.108 [2024-11-05 11:28:02.157757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.108 [2024-11-05 11:28:02.157828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.108 [2024-11-05 11:28:02.157938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.108 [2024-11-05 11:28:02.158022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.108 [2024-11-05 11:28:02.158074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69505 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 69505 ']' 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69505 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69505 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69505' 00:12:03.108 killing process with pid 69505 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69505 00:12:03.108 [2024-11-05 11:28:02.205596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.108 11:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69505 00:12:03.368 [2024-11-05 11:28:02.596894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:04.748 00:12:04.748 real 0m11.554s 00:12:04.748 user 0m18.449s 00:12:04.748 sys 0m2.078s 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.748 ************************************ 00:12:04.748 END TEST raid_state_function_test 00:12:04.748 ************************************ 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.748 11:28:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:04.748 11:28:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:04.748 11:28:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.748 11:28:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.748 ************************************ 00:12:04.748 START TEST raid_state_function_test_sb 00:12:04.748 ************************************ 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:04.748 11:28:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70182 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:04.748 Process raid pid: 70182 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70182' 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70182 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70182 ']' 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.748 11:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.748 [2024-11-05 11:28:03.855278] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:04.748 [2024-11-05 11:28:03.855489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:05.008 [2024-11-05 11:28:04.027867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:05.008 [2024-11-05 11:28:04.140925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:05.268 [2024-11-05 11:28:04.341048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:05.268 [2024-11-05 11:28:04.341084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.528 [2024-11-05 11:28:04.690620] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:05.528 [2024-11-05 11:28:04.690714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:05.528 [2024-11-05 11:28:04.690730] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:05.528 [2024-11-05 11:28:04.690739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:05.528 [2024-11-05 11:28:04.690746] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:05.528 [2024-11-05 11:28:04.690755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:05.528 [2024-11-05 11:28:04.690761] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:05.528 [2024-11-05 11:28:04.690770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.528 "name": "Existed_Raid",
00:12:05.528 "uuid": "8bdcb784-6db8-40d0-ab7f-f6eadbdf2358",
00:12:05.528 "strip_size_kb": 64,
00:12:05.528 "state": "configuring",
00:12:05.528 "raid_level": "raid0",
00:12:05.528 "superblock": true,
00:12:05.528 "num_base_bdevs": 4,
00:12:05.528 "num_base_bdevs_discovered": 0,
00:12:05.528 "num_base_bdevs_operational": 4,
00:12:05.528 "base_bdevs_list": [
00:12:05.528 {
00:12:05.528 "name": "BaseBdev1",
00:12:05.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.528 "is_configured": false,
00:12:05.528 "data_offset": 0,
00:12:05.528 "data_size": 0
00:12:05.528 },
00:12:05.528 {
00:12:05.528 "name": "BaseBdev2",
00:12:05.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.528 "is_configured": false,
00:12:05.528 "data_offset": 0,
00:12:05.528 "data_size": 0
00:12:05.528 },
00:12:05.528 {
00:12:05.528 "name": "BaseBdev3",
00:12:05.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.528 "is_configured": false,
00:12:05.528 "data_offset": 0,
00:12:05.528 "data_size": 0
00:12:05.528 },
00:12:05.528 {
00:12:05.528 "name": "BaseBdev4",
00:12:05.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.528 "is_configured": false,
00:12:05.528 "data_offset": 0,
00:12:05.528 "data_size": 0
00:12:05.528 }
00:12:05.528 ]
00:12:05.528 }'
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.528 11:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.098 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:06.098 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.098 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.098 [2024-11-05 11:28:05.093859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:06.098 [2024-11-05 11:28:05.093963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.099 [2024-11-05 11:28:05.101841] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:06.099 [2024-11-05 11:28:05.101920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:06.099 [2024-11-05 11:28:05.101947] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:06.099 [2024-11-05 11:28:05.101986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:06.099 [2024-11-05 11:28:05.102005] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:06.099 [2024-11-05 11:28:05.102026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:06.099 [2024-11-05 11:28:05.102044] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:06.099 [2024-11-05 11:28:05.102065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.099 [2024-11-05 11:28:05.144830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:06.099 BaseBdev1
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.099 [
00:12:06.099 {
00:12:06.099 "name": "BaseBdev1",
00:12:06.099 "aliases": [
00:12:06.099 "1d9618f7-bbce-4422-8471-2185ec8552e8"
00:12:06.099 ],
00:12:06.099 "product_name": "Malloc disk",
00:12:06.099 "block_size": 512,
00:12:06.099 "num_blocks": 65536,
00:12:06.099 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8",
00:12:06.099 "assigned_rate_limits": {
00:12:06.099 "rw_ios_per_sec": 0,
00:12:06.099 "rw_mbytes_per_sec": 0,
00:12:06.099 "r_mbytes_per_sec": 0,
00:12:06.099 "w_mbytes_per_sec": 0
00:12:06.099 },
00:12:06.099 "claimed": true,
00:12:06.099 "claim_type": "exclusive_write",
00:12:06.099 "zoned": false,
00:12:06.099 "supported_io_types": {
00:12:06.099 "read": true,
00:12:06.099 "write": true,
00:12:06.099 "unmap": true,
00:12:06.099 "flush": true,
00:12:06.099 "reset": true,
00:12:06.099 "nvme_admin": false,
00:12:06.099 "nvme_io": false,
00:12:06.099 "nvme_io_md": false,
00:12:06.099 "write_zeroes": true,
00:12:06.099 "zcopy": true,
00:12:06.099 "get_zone_info": false,
00:12:06.099 "zone_management": false,
00:12:06.099 "zone_append": false,
00:12:06.099 "compare": false,
00:12:06.099 "compare_and_write": false,
00:12:06.099 "abort": true,
00:12:06.099 "seek_hole": false,
00:12:06.099 "seek_data": false,
00:12:06.099 "copy": true,
00:12:06.099 "nvme_iov_md": false
00:12:06.099 },
00:12:06.099 "memory_domains": [
00:12:06.099 {
00:12:06.099 "dma_device_id": "system",
00:12:06.099 "dma_device_type": 1
00:12:06.099 },
00:12:06.099 {
00:12:06.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.099 "dma_device_type": 2
00:12:06.099 }
00:12:06.099 ],
00:12:06.099 "driver_specific": {}
00:12:06.099 }
00:12:06.099 ]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.099 "name": "Existed_Raid",
00:12:06.099 "uuid": "5f10326f-2acf-4583-abc5-451bf2b18c60",
00:12:06.099 "strip_size_kb": 64,
00:12:06.099 "state": "configuring",
00:12:06.099 "raid_level": "raid0",
00:12:06.099 "superblock": true,
00:12:06.099 "num_base_bdevs": 4,
00:12:06.099 "num_base_bdevs_discovered": 1,
00:12:06.099 "num_base_bdevs_operational": 4,
00:12:06.099 "base_bdevs_list": [
00:12:06.099 {
00:12:06.099 "name": "BaseBdev1",
00:12:06.099 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8",
00:12:06.099 "is_configured": true,
00:12:06.099 "data_offset": 2048,
00:12:06.099 "data_size": 63488
00:12:06.099 },
00:12:06.099 {
00:12:06.099 "name": "BaseBdev2",
00:12:06.099 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.099 "is_configured": false,
00:12:06.099 "data_offset": 0,
00:12:06.099 "data_size": 0
00:12:06.099 },
00:12:06.099 {
00:12:06.099 "name": "BaseBdev3",
00:12:06.099 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.099 "is_configured": false,
00:12:06.099 "data_offset": 0,
00:12:06.099 "data_size": 0
00:12:06.099 },
00:12:06.099 {
00:12:06.099 "name": "BaseBdev4",
00:12:06.099 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.099 "is_configured": false,
00:12:06.099 "data_offset": 0,
00:12:06.099 "data_size": 0
00:12:06.099 }
00:12:06.099 ]
00:12:06.099 }'
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.099 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.359 [2024-11-05 11:28:05.620072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:06.359 [2024-11-05 11:28:05.620204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.359 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.359 [2024-11-05 11:28:05.632105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:06.618 [2024-11-05 11:28:05.633999] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:06.618 [2024-11-05 11:28:05.634042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:06.618 [2024-11-05 11:28:05.634052] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:06.618 [2024-11-05 11:28:05.634062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:06.618 [2024-11-05 11:28:05.634069] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:06.618 [2024-11-05 11:28:05.634077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.618 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.619 "name": "Existed_Raid",
00:12:06.619 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884",
00:12:06.619 "strip_size_kb": 64,
00:12:06.619 "state": "configuring",
00:12:06.619 "raid_level": "raid0",
00:12:06.619 "superblock": true,
00:12:06.619 "num_base_bdevs": 4,
00:12:06.619 "num_base_bdevs_discovered": 1,
00:12:06.619 "num_base_bdevs_operational": 4,
00:12:06.619 "base_bdevs_list": [
00:12:06.619 {
00:12:06.619 "name": "BaseBdev1",
00:12:06.619 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8",
00:12:06.619 "is_configured": true,
00:12:06.619 "data_offset": 2048,
00:12:06.619 "data_size": 63488
00:12:06.619 },
00:12:06.619 {
00:12:06.619 "name": "BaseBdev2",
00:12:06.619 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.619 "is_configured": false,
00:12:06.619 "data_offset": 0,
00:12:06.619 "data_size": 0
00:12:06.619 },
00:12:06.619 {
00:12:06.619 "name": "BaseBdev3",
00:12:06.619 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.619 "is_configured": false,
00:12:06.619 "data_offset": 0,
00:12:06.619 "data_size": 0
00:12:06.619 },
00:12:06.619 {
00:12:06.619 "name": "BaseBdev4",
00:12:06.619 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.619 "is_configured": false,
00:12:06.619 "data_offset": 0,
00:12:06.619 "data_size": 0
00:12:06.619 }
00:12:06.619 ]
00:12:06.619 }'
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.619 11:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.879 [2024-11-05 11:28:06.129359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:06.879 BaseBdev2
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.879 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.138 [
00:12:07.138 {
00:12:07.138 "name": "BaseBdev2",
00:12:07.138 "aliases": [
00:12:07.138 "4685f27f-33ff-44f4-afb6-6d238b3a5025"
00:12:07.138 ],
00:12:07.138 "product_name": "Malloc disk",
00:12:07.138 "block_size": 512,
00:12:07.138 "num_blocks": 65536,
00:12:07.138 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025",
00:12:07.138 "assigned_rate_limits": {
00:12:07.138 "rw_ios_per_sec": 0,
00:12:07.138 "rw_mbytes_per_sec": 0,
00:12:07.138 "r_mbytes_per_sec": 0,
00:12:07.138 "w_mbytes_per_sec": 0
00:12:07.138 },
00:12:07.138 "claimed": true,
00:12:07.138 "claim_type": "exclusive_write",
00:12:07.138 "zoned": false,
00:12:07.138 "supported_io_types": {
00:12:07.138 "read": true,
00:12:07.138 "write": true,
00:12:07.138 "unmap": true,
00:12:07.138 "flush": true,
00:12:07.138 "reset": true,
00:12:07.138 "nvme_admin": false,
00:12:07.138 "nvme_io": false,
00:12:07.138 "nvme_io_md": false,
00:12:07.138 "write_zeroes": true,
00:12:07.138 "zcopy": true,
00:12:07.138 "get_zone_info": false,
00:12:07.138 "zone_management": false,
00:12:07.138 "zone_append": false,
00:12:07.138 "compare": false,
00:12:07.138 "compare_and_write": false,
00:12:07.138 "abort": true,
00:12:07.138 "seek_hole": false,
00:12:07.138 "seek_data": false,
00:12:07.138 "copy": true,
00:12:07.138 "nvme_iov_md": false
00:12:07.138 },
00:12:07.138 "memory_domains": [
00:12:07.138 {
00:12:07.138 "dma_device_id": "system",
00:12:07.138 "dma_device_type": 1
00:12:07.138 },
00:12:07.138 {
00:12:07.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.138 "dma_device_type": 2
00:12:07.138 }
00:12:07.138 ],
00:12:07.138 "driver_specific": {}
00:12:07.138 }
00:12:07.138 ]
00:12:07.138 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:07.139 "name": "Existed_Raid",
00:12:07.139 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884",
00:12:07.139 "strip_size_kb": 64,
00:12:07.139 "state": "configuring",
00:12:07.139 "raid_level": "raid0",
00:12:07.139 "superblock": true,
00:12:07.139 "num_base_bdevs": 4,
00:12:07.139 "num_base_bdevs_discovered": 2,
00:12:07.139 "num_base_bdevs_operational": 4,
00:12:07.139 "base_bdevs_list": [
00:12:07.139 {
00:12:07.139 "name": "BaseBdev1",
00:12:07.139 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8",
00:12:07.139 "is_configured": true,
00:12:07.139 "data_offset": 2048,
00:12:07.139 "data_size": 63488
00:12:07.139 },
00:12:07.139 {
00:12:07.139 "name": "BaseBdev2",
00:12:07.139 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025",
00:12:07.139 "is_configured": true,
00:12:07.139 "data_offset": 2048,
00:12:07.139 "data_size": 63488
00:12:07.139 },
00:12:07.139 {
00:12:07.139 "name": "BaseBdev3",
00:12:07.139 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:07.139 "is_configured": false,
00:12:07.139 "data_offset": 0,
00:12:07.139 "data_size": 0
00:12:07.139 },
00:12:07.139 {
00:12:07.139 "name": "BaseBdev4",
00:12:07.139 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:07.139 "is_configured": false,
00:12:07.139 "data_offset": 0,
00:12:07.139 "data_size": 0
00:12:07.139 }
00:12:07.139 ]
00:12:07.139 }'
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:07.139 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.398 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:07.398 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.398 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.657 [2024-11-05 11:28:06.672897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:07.657 BaseBdev3
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.657 [
00:12:07.657 {
00:12:07.657 "name": "BaseBdev3",
00:12:07.657 "aliases": [
00:12:07.657 "f3d825f5-2f3a-4f9a-ac8c-005c26177166"
00:12:07.657 ],
00:12:07.657 "product_name": "Malloc disk",
00:12:07.657 "block_size": 512,
00:12:07.657 "num_blocks": 65536,
00:12:07.657 "uuid": "f3d825f5-2f3a-4f9a-ac8c-005c26177166",
00:12:07.657 "assigned_rate_limits": {
00:12:07.657 "rw_ios_per_sec": 0,
00:12:07.657 "rw_mbytes_per_sec": 0,
00:12:07.657 "r_mbytes_per_sec": 0,
00:12:07.657 "w_mbytes_per_sec": 0
00:12:07.657 },
00:12:07.657 "claimed": true,
00:12:07.657 "claim_type": "exclusive_write",
00:12:07.657 "zoned": false,
00:12:07.657 "supported_io_types": {
00:12:07.657 "read": true,
00:12:07.657 "write": true,
00:12:07.657 "unmap": true,
00:12:07.657 "flush": true,
00:12:07.657 "reset": true,
00:12:07.657 "nvme_admin": false,
00:12:07.657 "nvme_io": false,
00:12:07.657 "nvme_io_md": false,
00:12:07.657 "write_zeroes": true,
00:12:07.657 "zcopy": true,
00:12:07.657 "get_zone_info": false,
00:12:07.657 "zone_management": false,
00:12:07.657 "zone_append": false,
00:12:07.657 "compare": false,
00:12:07.657 "compare_and_write": false,
00:12:07.657 "abort": true,
00:12:07.657 "seek_hole": false,
00:12:07.657 "seek_data": false,
00:12:07.657 "copy": true,
00:12:07.657 "nvme_iov_md": false
00:12:07.657 },
00:12:07.657 "memory_domains": [
00:12:07.657 {
00:12:07.657 "dma_device_id": "system",
00:12:07.657 "dma_device_type": 1
00:12:07.657 },
00:12:07.657 {
00:12:07.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:07.657 "dma_device_type": 2
00:12:07.657 }
00:12:07.657 ],
00:12:07.657 "driver_specific": {}
00:12:07.657 }
00:12:07.657 ]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:07.657 "name": "Existed_Raid",
00:12:07.657 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884",
00:12:07.657 "strip_size_kb": 64,
00:12:07.657 "state": "configuring",
00:12:07.657 "raid_level": "raid0",
00:12:07.657 "superblock": true,
00:12:07.657 "num_base_bdevs": 4,
00:12:07.657 "num_base_bdevs_discovered": 3,
00:12:07.657 "num_base_bdevs_operational": 4,
00:12:07.657 "base_bdevs_list": [
00:12:07.657 {
00:12:07.657 "name": "BaseBdev1",
00:12:07.657 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8",
00:12:07.657 "is_configured": true,
00:12:07.657 "data_offset": 2048,
00:12:07.657 "data_size": 63488
00:12:07.657 },
00:12:07.657 {
00:12:07.657 "name": "BaseBdev2",
00:12:07.657 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025",
00:12:07.657 "is_configured": true,
00:12:07.657 "data_offset": 2048,
00:12:07.657 "data_size": 63488
00:12:07.657 },
00:12:07.657 {
00:12:07.657 "name": "BaseBdev3",
00:12:07.657 "uuid": "f3d825f5-2f3a-4f9a-ac8c-005c26177166",
00:12:07.657 "is_configured": true,
00:12:07.657 "data_offset": 2048,
00:12:07.657 "data_size": 63488
00:12:07.657 },
00:12:07.657 {
00:12:07.657 "name": "BaseBdev4",
00:12:07.657 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:07.657 "is_configured": false,
00:12:07.657 "data_offset": 0,
00:12:07.657 "data_size": 0
00:12:07.657 }
00:12:07.657 ]
00:12:07.657 }'
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:07.657 11:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.917 [2024-11-05 11:28:07.174792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:07.917 [2024-11-05 11:28:07.175181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:07.917 [2024-11-05 11:28:07.175235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:07.917 [2024-11-05 11:28:07.175524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:07.917 [2024-11-05 11:28:07.175748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:07.917 [2024-11-05 11:28:07.175800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:12:07.917 BaseBdev4
00:12:07.917 [2024-11-05 11:28:07.176013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:07.917 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.176 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:08.176 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.176 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:08.176 [
00:12:08.176 {
00:12:08.176 "name": "BaseBdev4",
00:12:08.176 "aliases": [
00:12:08.176 "f8976f49-e98e-4d47-b183-dde32286f540"
00:12:08.176 ],
00:12:08.176 "product_name": "Malloc disk",
00:12:08.176 "block_size": 512,
00:12:08.176
"num_blocks": 65536, 00:12:08.176 "uuid": "f8976f49-e98e-4d47-b183-dde32286f540", 00:12:08.176 "assigned_rate_limits": { 00:12:08.176 "rw_ios_per_sec": 0, 00:12:08.176 "rw_mbytes_per_sec": 0, 00:12:08.176 "r_mbytes_per_sec": 0, 00:12:08.176 "w_mbytes_per_sec": 0 00:12:08.176 }, 00:12:08.176 "claimed": true, 00:12:08.176 "claim_type": "exclusive_write", 00:12:08.176 "zoned": false, 00:12:08.176 "supported_io_types": { 00:12:08.176 "read": true, 00:12:08.176 "write": true, 00:12:08.176 "unmap": true, 00:12:08.176 "flush": true, 00:12:08.176 "reset": true, 00:12:08.176 "nvme_admin": false, 00:12:08.176 "nvme_io": false, 00:12:08.176 "nvme_io_md": false, 00:12:08.176 "write_zeroes": true, 00:12:08.176 "zcopy": true, 00:12:08.176 "get_zone_info": false, 00:12:08.176 "zone_management": false, 00:12:08.176 "zone_append": false, 00:12:08.176 "compare": false, 00:12:08.176 "compare_and_write": false, 00:12:08.176 "abort": true, 00:12:08.176 "seek_hole": false, 00:12:08.176 "seek_data": false, 00:12:08.176 "copy": true, 00:12:08.176 "nvme_iov_md": false 00:12:08.176 }, 00:12:08.176 "memory_domains": [ 00:12:08.176 { 00:12:08.176 "dma_device_id": "system", 00:12:08.176 "dma_device_type": 1 00:12:08.176 }, 00:12:08.176 { 00:12:08.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.176 "dma_device_type": 2 00:12:08.176 } 00:12:08.176 ], 00:12:08.176 "driver_specific": {} 00:12:08.176 } 00:12:08.176 ] 00:12:08.176 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.176 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.177 "name": "Existed_Raid", 00:12:08.177 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884", 00:12:08.177 "strip_size_kb": 64, 00:12:08.177 "state": "online", 00:12:08.177 "raid_level": "raid0", 00:12:08.177 "superblock": true, 00:12:08.177 "num_base_bdevs": 4, 
00:12:08.177 "num_base_bdevs_discovered": 4, 00:12:08.177 "num_base_bdevs_operational": 4, 00:12:08.177 "base_bdevs_list": [ 00:12:08.177 { 00:12:08.177 "name": "BaseBdev1", 00:12:08.177 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8", 00:12:08.177 "is_configured": true, 00:12:08.177 "data_offset": 2048, 00:12:08.177 "data_size": 63488 00:12:08.177 }, 00:12:08.177 { 00:12:08.177 "name": "BaseBdev2", 00:12:08.177 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025", 00:12:08.177 "is_configured": true, 00:12:08.177 "data_offset": 2048, 00:12:08.177 "data_size": 63488 00:12:08.177 }, 00:12:08.177 { 00:12:08.177 "name": "BaseBdev3", 00:12:08.177 "uuid": "f3d825f5-2f3a-4f9a-ac8c-005c26177166", 00:12:08.177 "is_configured": true, 00:12:08.177 "data_offset": 2048, 00:12:08.177 "data_size": 63488 00:12:08.177 }, 00:12:08.177 { 00:12:08.177 "name": "BaseBdev4", 00:12:08.177 "uuid": "f8976f49-e98e-4d47-b183-dde32286f540", 00:12:08.177 "is_configured": true, 00:12:08.177 "data_offset": 2048, 00:12:08.177 "data_size": 63488 00:12:08.177 } 00:12:08.177 ] 00:12:08.177 }' 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.177 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.436 
11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.436 [2024-11-05 11:28:07.646377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.436 "name": "Existed_Raid", 00:12:08.436 "aliases": [ 00:12:08.436 "21cbca5e-cac5-4c59-a907-560129cb8884" 00:12:08.436 ], 00:12:08.436 "product_name": "Raid Volume", 00:12:08.436 "block_size": 512, 00:12:08.436 "num_blocks": 253952, 00:12:08.436 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884", 00:12:08.436 "assigned_rate_limits": { 00:12:08.436 "rw_ios_per_sec": 0, 00:12:08.436 "rw_mbytes_per_sec": 0, 00:12:08.436 "r_mbytes_per_sec": 0, 00:12:08.436 "w_mbytes_per_sec": 0 00:12:08.436 }, 00:12:08.436 "claimed": false, 00:12:08.436 "zoned": false, 00:12:08.436 "supported_io_types": { 00:12:08.436 "read": true, 00:12:08.436 "write": true, 00:12:08.436 "unmap": true, 00:12:08.436 "flush": true, 00:12:08.436 "reset": true, 00:12:08.436 "nvme_admin": false, 00:12:08.436 "nvme_io": false, 00:12:08.436 "nvme_io_md": false, 00:12:08.436 "write_zeroes": true, 00:12:08.436 "zcopy": false, 00:12:08.436 "get_zone_info": false, 00:12:08.436 "zone_management": false, 00:12:08.436 "zone_append": false, 00:12:08.436 "compare": false, 00:12:08.436 "compare_and_write": false, 00:12:08.436 "abort": false, 00:12:08.436 "seek_hole": false, 00:12:08.436 "seek_data": false, 00:12:08.436 "copy": false, 00:12:08.436 
"nvme_iov_md": false 00:12:08.436 }, 00:12:08.436 "memory_domains": [ 00:12:08.436 { 00:12:08.436 "dma_device_id": "system", 00:12:08.436 "dma_device_type": 1 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.436 "dma_device_type": 2 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "system", 00:12:08.436 "dma_device_type": 1 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.436 "dma_device_type": 2 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "system", 00:12:08.436 "dma_device_type": 1 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.436 "dma_device_type": 2 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "system", 00:12:08.436 "dma_device_type": 1 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.436 "dma_device_type": 2 00:12:08.436 } 00:12:08.436 ], 00:12:08.436 "driver_specific": { 00:12:08.436 "raid": { 00:12:08.436 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884", 00:12:08.436 "strip_size_kb": 64, 00:12:08.436 "state": "online", 00:12:08.436 "raid_level": "raid0", 00:12:08.436 "superblock": true, 00:12:08.436 "num_base_bdevs": 4, 00:12:08.436 "num_base_bdevs_discovered": 4, 00:12:08.436 "num_base_bdevs_operational": 4, 00:12:08.436 "base_bdevs_list": [ 00:12:08.436 { 00:12:08.436 "name": "BaseBdev1", 00:12:08.436 "uuid": "1d9618f7-bbce-4422-8471-2185ec8552e8", 00:12:08.436 "is_configured": true, 00:12:08.436 "data_offset": 2048, 00:12:08.436 "data_size": 63488 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "name": "BaseBdev2", 00:12:08.436 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025", 00:12:08.436 "is_configured": true, 00:12:08.436 "data_offset": 2048, 00:12:08.436 "data_size": 63488 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "name": "BaseBdev3", 00:12:08.436 "uuid": "f3d825f5-2f3a-4f9a-ac8c-005c26177166", 00:12:08.436 "is_configured": true, 
00:12:08.436 "data_offset": 2048, 00:12:08.436 "data_size": 63488 00:12:08.436 }, 00:12:08.436 { 00:12:08.436 "name": "BaseBdev4", 00:12:08.436 "uuid": "f8976f49-e98e-4d47-b183-dde32286f540", 00:12:08.436 "is_configured": true, 00:12:08.436 "data_offset": 2048, 00:12:08.436 "data_size": 63488 00:12:08.436 } 00:12:08.436 ] 00:12:08.436 } 00:12:08.436 } 00:12:08.436 }' 00:12:08.436 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:08.695 BaseBdev2 00:12:08.695 BaseBdev3 00:12:08.695 BaseBdev4' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.695 11:28:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.695 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.696 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.992 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.992 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.992 11:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.992 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.992 11:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.992 [2024-11-05 11:28:07.993483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.992 [2024-11-05 11:28:07.993556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.992 [2024-11-05 11:28:07.993646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.992 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.993 "name": "Existed_Raid", 00:12:08.993 "uuid": "21cbca5e-cac5-4c59-a907-560129cb8884", 00:12:08.993 "strip_size_kb": 64, 00:12:08.993 "state": "offline", 00:12:08.993 "raid_level": "raid0", 00:12:08.993 "superblock": true, 00:12:08.993 "num_base_bdevs": 4, 00:12:08.993 "num_base_bdevs_discovered": 3, 00:12:08.993 "num_base_bdevs_operational": 3, 00:12:08.993 "base_bdevs_list": [ 00:12:08.993 { 00:12:08.993 "name": null, 00:12:08.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.993 "is_configured": false, 00:12:08.993 "data_offset": 0, 00:12:08.993 "data_size": 63488 00:12:08.993 }, 00:12:08.993 { 00:12:08.993 "name": "BaseBdev2", 00:12:08.993 "uuid": "4685f27f-33ff-44f4-afb6-6d238b3a5025", 00:12:08.993 "is_configured": true, 00:12:08.993 "data_offset": 2048, 00:12:08.993 "data_size": 63488 00:12:08.993 }, 00:12:08.993 { 00:12:08.993 "name": "BaseBdev3", 00:12:08.993 "uuid": "f3d825f5-2f3a-4f9a-ac8c-005c26177166", 00:12:08.993 "is_configured": true, 00:12:08.993 "data_offset": 2048, 00:12:08.993 "data_size": 63488 00:12:08.993 }, 00:12:08.993 { 00:12:08.993 "name": "BaseBdev4", 00:12:08.993 "uuid": "f8976f49-e98e-4d47-b183-dde32286f540", 00:12:08.993 "is_configured": true, 00:12:08.993 "data_offset": 2048, 00:12:08.993 "data_size": 63488 00:12:08.993 } 00:12:08.993 ] 00:12:08.993 }' 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.993 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.284 
11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.284 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.542 [2024-11-05 11:28:08.566976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.542 [2024-11-05 11:28:08.717156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.542 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:09.802 11:28:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.802 [2024-11-05 11:28:08.871749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:09.802 [2024-11-05 11:28:08.871803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.802 11:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.802 BaseBdev2 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.802 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 [ 00:12:10.062 { 00:12:10.062 "name": "BaseBdev2", 00:12:10.062 "aliases": [ 00:12:10.062 
"3f281b13-0af0-48d8-a33e-6fc230e1bd6c" 00:12:10.062 ], 00:12:10.062 "product_name": "Malloc disk", 00:12:10.062 "block_size": 512, 00:12:10.062 "num_blocks": 65536, 00:12:10.062 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:10.062 "assigned_rate_limits": { 00:12:10.062 "rw_ios_per_sec": 0, 00:12:10.062 "rw_mbytes_per_sec": 0, 00:12:10.062 "r_mbytes_per_sec": 0, 00:12:10.062 "w_mbytes_per_sec": 0 00:12:10.062 }, 00:12:10.062 "claimed": false, 00:12:10.062 "zoned": false, 00:12:10.062 "supported_io_types": { 00:12:10.062 "read": true, 00:12:10.062 "write": true, 00:12:10.062 "unmap": true, 00:12:10.062 "flush": true, 00:12:10.062 "reset": true, 00:12:10.062 "nvme_admin": false, 00:12:10.062 "nvme_io": false, 00:12:10.062 "nvme_io_md": false, 00:12:10.062 "write_zeroes": true, 00:12:10.062 "zcopy": true, 00:12:10.062 "get_zone_info": false, 00:12:10.062 "zone_management": false, 00:12:10.062 "zone_append": false, 00:12:10.062 "compare": false, 00:12:10.062 "compare_and_write": false, 00:12:10.062 "abort": true, 00:12:10.062 "seek_hole": false, 00:12:10.062 "seek_data": false, 00:12:10.062 "copy": true, 00:12:10.062 "nvme_iov_md": false 00:12:10.062 }, 00:12:10.062 "memory_domains": [ 00:12:10.062 { 00:12:10.062 "dma_device_id": "system", 00:12:10.062 "dma_device_type": 1 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.062 "dma_device_type": 2 00:12:10.062 } 00:12:10.062 ], 00:12:10.062 "driver_specific": {} 00:12:10.062 } 00:12:10.062 ] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.062 11:28:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 BaseBdev3 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 [ 00:12:10.062 { 
00:12:10.062 "name": "BaseBdev3", 00:12:10.062 "aliases": [ 00:12:10.062 "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad" 00:12:10.062 ], 00:12:10.062 "product_name": "Malloc disk", 00:12:10.062 "block_size": 512, 00:12:10.062 "num_blocks": 65536, 00:12:10.062 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:10.062 "assigned_rate_limits": { 00:12:10.062 "rw_ios_per_sec": 0, 00:12:10.062 "rw_mbytes_per_sec": 0, 00:12:10.062 "r_mbytes_per_sec": 0, 00:12:10.062 "w_mbytes_per_sec": 0 00:12:10.062 }, 00:12:10.062 "claimed": false, 00:12:10.062 "zoned": false, 00:12:10.062 "supported_io_types": { 00:12:10.062 "read": true, 00:12:10.062 "write": true, 00:12:10.062 "unmap": true, 00:12:10.062 "flush": true, 00:12:10.062 "reset": true, 00:12:10.062 "nvme_admin": false, 00:12:10.062 "nvme_io": false, 00:12:10.062 "nvme_io_md": false, 00:12:10.062 "write_zeroes": true, 00:12:10.062 "zcopy": true, 00:12:10.062 "get_zone_info": false, 00:12:10.062 "zone_management": false, 00:12:10.062 "zone_append": false, 00:12:10.062 "compare": false, 00:12:10.062 "compare_and_write": false, 00:12:10.062 "abort": true, 00:12:10.062 "seek_hole": false, 00:12:10.062 "seek_data": false, 00:12:10.062 "copy": true, 00:12:10.062 "nvme_iov_md": false 00:12:10.062 }, 00:12:10.062 "memory_domains": [ 00:12:10.062 { 00:12:10.062 "dma_device_id": "system", 00:12:10.062 "dma_device_type": 1 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.062 "dma_device_type": 2 00:12:10.062 } 00:12:10.062 ], 00:12:10.062 "driver_specific": {} 00:12:10.062 } 00:12:10.062 ] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 BaseBdev4 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.062 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:10.062 [ 00:12:10.062 { 00:12:10.062 "name": "BaseBdev4", 00:12:10.062 "aliases": [ 00:12:10.062 "1c27984c-e68f-42ba-87ec-8123d77c1fc8" 00:12:10.062 ], 00:12:10.062 "product_name": "Malloc disk", 00:12:10.062 "block_size": 512, 00:12:10.062 "num_blocks": 65536, 00:12:10.062 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:10.062 "assigned_rate_limits": { 00:12:10.062 "rw_ios_per_sec": 0, 00:12:10.062 "rw_mbytes_per_sec": 0, 00:12:10.063 "r_mbytes_per_sec": 0, 00:12:10.063 "w_mbytes_per_sec": 0 00:12:10.063 }, 00:12:10.063 "claimed": false, 00:12:10.063 "zoned": false, 00:12:10.063 "supported_io_types": { 00:12:10.063 "read": true, 00:12:10.063 "write": true, 00:12:10.063 "unmap": true, 00:12:10.063 "flush": true, 00:12:10.063 "reset": true, 00:12:10.063 "nvme_admin": false, 00:12:10.063 "nvme_io": false, 00:12:10.063 "nvme_io_md": false, 00:12:10.063 "write_zeroes": true, 00:12:10.063 "zcopy": true, 00:12:10.063 "get_zone_info": false, 00:12:10.063 "zone_management": false, 00:12:10.063 "zone_append": false, 00:12:10.063 "compare": false, 00:12:10.063 "compare_and_write": false, 00:12:10.063 "abort": true, 00:12:10.063 "seek_hole": false, 00:12:10.063 "seek_data": false, 00:12:10.063 "copy": true, 00:12:10.063 "nvme_iov_md": false 00:12:10.063 }, 00:12:10.063 "memory_domains": [ 00:12:10.063 { 00:12:10.063 "dma_device_id": "system", 00:12:10.063 "dma_device_type": 1 00:12:10.063 }, 00:12:10.063 { 00:12:10.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.063 "dma_device_type": 2 00:12:10.063 } 00:12:10.063 ], 00:12:10.063 "driver_specific": {} 00:12:10.063 } 00:12:10.063 ] 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.063 11:28:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.063 [2024-11-05 11:28:09.259383] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.063 [2024-11-05 11:28:09.259464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.063 [2024-11-05 11:28:09.259503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.063 [2024-11-05 11:28:09.261274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.063 [2024-11-05 11:28:09.261377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.063 "name": "Existed_Raid", 00:12:10.063 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:10.063 "strip_size_kb": 64, 00:12:10.063 "state": "configuring", 00:12:10.063 "raid_level": "raid0", 00:12:10.063 "superblock": true, 00:12:10.063 "num_base_bdevs": 4, 00:12:10.063 "num_base_bdevs_discovered": 3, 00:12:10.063 "num_base_bdevs_operational": 4, 00:12:10.063 "base_bdevs_list": [ 00:12:10.063 { 00:12:10.063 "name": "BaseBdev1", 00:12:10.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.063 "is_configured": false, 00:12:10.063 "data_offset": 0, 00:12:10.063 "data_size": 0 00:12:10.063 }, 00:12:10.063 { 00:12:10.063 "name": "BaseBdev2", 00:12:10.063 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:10.063 "is_configured": true, 00:12:10.063 "data_offset": 2048, 00:12:10.063 "data_size": 63488 
00:12:10.063 }, 00:12:10.063 { 00:12:10.063 "name": "BaseBdev3", 00:12:10.063 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:10.063 "is_configured": true, 00:12:10.063 "data_offset": 2048, 00:12:10.063 "data_size": 63488 00:12:10.063 }, 00:12:10.063 { 00:12:10.063 "name": "BaseBdev4", 00:12:10.063 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:10.063 "is_configured": true, 00:12:10.063 "data_offset": 2048, 00:12:10.063 "data_size": 63488 00:12:10.063 } 00:12:10.063 ] 00:12:10.063 }' 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.063 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.631 [2024-11-05 11:28:09.710699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.631 "name": "Existed_Raid", 00:12:10.631 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:10.631 "strip_size_kb": 64, 00:12:10.631 "state": "configuring", 00:12:10.631 "raid_level": "raid0", 00:12:10.631 "superblock": true, 00:12:10.631 "num_base_bdevs": 4, 00:12:10.631 "num_base_bdevs_discovered": 2, 00:12:10.631 "num_base_bdevs_operational": 4, 00:12:10.631 "base_bdevs_list": [ 00:12:10.631 { 00:12:10.631 "name": "BaseBdev1", 00:12:10.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.631 "is_configured": false, 00:12:10.631 "data_offset": 0, 00:12:10.631 "data_size": 0 00:12:10.631 }, 00:12:10.631 { 00:12:10.631 "name": null, 00:12:10.631 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:10.631 "is_configured": false, 00:12:10.631 "data_offset": 0, 00:12:10.631 "data_size": 63488 
00:12:10.631 }, 00:12:10.631 { 00:12:10.631 "name": "BaseBdev3", 00:12:10.631 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:10.631 "is_configured": true, 00:12:10.631 "data_offset": 2048, 00:12:10.631 "data_size": 63488 00:12:10.631 }, 00:12:10.631 { 00:12:10.631 "name": "BaseBdev4", 00:12:10.631 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:10.631 "is_configured": true, 00:12:10.631 "data_offset": 2048, 00:12:10.631 "data_size": 63488 00:12:10.631 } 00:12:10.631 ] 00:12:10.631 }' 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.631 11:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.890 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.890 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.890 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.890 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.151 [2024-11-05 11:28:10.246883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.151 BaseBdev1 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.151 [ 00:12:11.151 { 00:12:11.151 "name": "BaseBdev1", 00:12:11.151 "aliases": [ 00:12:11.151 "35b12925-ced3-42a0-8d58-7de4c5ce5626" 00:12:11.151 ], 00:12:11.151 "product_name": "Malloc disk", 00:12:11.151 "block_size": 512, 00:12:11.151 "num_blocks": 65536, 00:12:11.151 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:11.151 "assigned_rate_limits": { 00:12:11.151 "rw_ios_per_sec": 0, 00:12:11.151 "rw_mbytes_per_sec": 0, 
00:12:11.151 "r_mbytes_per_sec": 0, 00:12:11.151 "w_mbytes_per_sec": 0 00:12:11.151 }, 00:12:11.151 "claimed": true, 00:12:11.151 "claim_type": "exclusive_write", 00:12:11.151 "zoned": false, 00:12:11.151 "supported_io_types": { 00:12:11.151 "read": true, 00:12:11.151 "write": true, 00:12:11.151 "unmap": true, 00:12:11.151 "flush": true, 00:12:11.151 "reset": true, 00:12:11.151 "nvme_admin": false, 00:12:11.151 "nvme_io": false, 00:12:11.151 "nvme_io_md": false, 00:12:11.151 "write_zeroes": true, 00:12:11.151 "zcopy": true, 00:12:11.151 "get_zone_info": false, 00:12:11.151 "zone_management": false, 00:12:11.151 "zone_append": false, 00:12:11.151 "compare": false, 00:12:11.151 "compare_and_write": false, 00:12:11.151 "abort": true, 00:12:11.151 "seek_hole": false, 00:12:11.151 "seek_data": false, 00:12:11.151 "copy": true, 00:12:11.151 "nvme_iov_md": false 00:12:11.151 }, 00:12:11.151 "memory_domains": [ 00:12:11.151 { 00:12:11.151 "dma_device_id": "system", 00:12:11.151 "dma_device_type": 1 00:12:11.151 }, 00:12:11.151 { 00:12:11.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.151 "dma_device_type": 2 00:12:11.151 } 00:12:11.151 ], 00:12:11.151 "driver_specific": {} 00:12:11.151 } 00:12:11.151 ] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.151 11:28:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.151 "name": "Existed_Raid", 00:12:11.151 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:11.151 "strip_size_kb": 64, 00:12:11.151 "state": "configuring", 00:12:11.151 "raid_level": "raid0", 00:12:11.151 "superblock": true, 00:12:11.151 "num_base_bdevs": 4, 00:12:11.151 "num_base_bdevs_discovered": 3, 00:12:11.151 "num_base_bdevs_operational": 4, 00:12:11.151 "base_bdevs_list": [ 00:12:11.151 { 00:12:11.151 "name": "BaseBdev1", 00:12:11.151 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:11.151 "is_configured": true, 00:12:11.151 "data_offset": 2048, 00:12:11.151 "data_size": 63488 00:12:11.151 }, 00:12:11.151 { 
00:12:11.151 "name": null, 00:12:11.151 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:11.151 "is_configured": false, 00:12:11.151 "data_offset": 0, 00:12:11.151 "data_size": 63488 00:12:11.151 }, 00:12:11.151 { 00:12:11.151 "name": "BaseBdev3", 00:12:11.151 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:11.151 "is_configured": true, 00:12:11.151 "data_offset": 2048, 00:12:11.151 "data_size": 63488 00:12:11.151 }, 00:12:11.151 { 00:12:11.151 "name": "BaseBdev4", 00:12:11.151 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:11.151 "is_configured": true, 00:12:11.151 "data_offset": 2048, 00:12:11.151 "data_size": 63488 00:12:11.151 } 00:12:11.151 ] 00:12:11.151 }' 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.151 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.720 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.721 [2024-11-05 11:28:10.766102] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.721 11:28:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.721 "name": "Existed_Raid", 00:12:11.721 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:11.721 "strip_size_kb": 64, 00:12:11.721 "state": "configuring", 00:12:11.721 "raid_level": "raid0", 00:12:11.721 "superblock": true, 00:12:11.721 "num_base_bdevs": 4, 00:12:11.721 "num_base_bdevs_discovered": 2, 00:12:11.721 "num_base_bdevs_operational": 4, 00:12:11.721 "base_bdevs_list": [ 00:12:11.721 { 00:12:11.721 "name": "BaseBdev1", 00:12:11.721 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:11.721 "is_configured": true, 00:12:11.721 "data_offset": 2048, 00:12:11.721 "data_size": 63488 00:12:11.721 }, 00:12:11.721 { 00:12:11.721 "name": null, 00:12:11.721 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:11.721 "is_configured": false, 00:12:11.721 "data_offset": 0, 00:12:11.721 "data_size": 63488 00:12:11.721 }, 00:12:11.721 { 00:12:11.721 "name": null, 00:12:11.721 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:11.721 "is_configured": false, 00:12:11.721 "data_offset": 0, 00:12:11.721 "data_size": 63488 00:12:11.721 }, 00:12:11.721 { 00:12:11.721 "name": "BaseBdev4", 00:12:11.721 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:11.721 "is_configured": true, 00:12:11.721 "data_offset": 2048, 00:12:11.721 "data_size": 63488 00:12:11.721 } 00:12:11.721 ] 00:12:11.721 }' 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.721 11:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.980 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.980 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.980 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.980 11:28:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:11.980 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.980 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.240 [2024-11-05 11:28:11.261265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.240 "name": "Existed_Raid", 00:12:12.240 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:12.240 "strip_size_kb": 64, 00:12:12.240 "state": "configuring", 00:12:12.240 "raid_level": "raid0", 00:12:12.240 "superblock": true, 00:12:12.240 "num_base_bdevs": 4, 00:12:12.240 "num_base_bdevs_discovered": 3, 00:12:12.240 "num_base_bdevs_operational": 4, 00:12:12.240 "base_bdevs_list": [ 00:12:12.240 { 00:12:12.240 "name": "BaseBdev1", 00:12:12.240 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:12.240 "is_configured": true, 00:12:12.240 "data_offset": 2048, 00:12:12.240 "data_size": 63488 00:12:12.240 }, 00:12:12.240 { 00:12:12.240 "name": null, 00:12:12.240 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:12.240 "is_configured": false, 00:12:12.240 "data_offset": 0, 00:12:12.240 "data_size": 63488 00:12:12.240 }, 00:12:12.240 { 00:12:12.240 "name": "BaseBdev3", 00:12:12.240 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:12.240 "is_configured": true, 00:12:12.240 "data_offset": 2048, 00:12:12.240 "data_size": 63488 00:12:12.240 }, 00:12:12.240 { 00:12:12.240 "name": "BaseBdev4", 00:12:12.240 "uuid": 
"1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:12.240 "is_configured": true, 00:12:12.240 "data_offset": 2048, 00:12:12.240 "data_size": 63488 00:12:12.240 } 00:12:12.240 ] 00:12:12.240 }' 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.240 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.500 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.500 [2024-11-05 11:28:11.712500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.759 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.760 "name": "Existed_Raid", 00:12:12.760 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:12.760 "strip_size_kb": 64, 00:12:12.760 "state": "configuring", 00:12:12.760 "raid_level": "raid0", 00:12:12.760 "superblock": true, 00:12:12.760 "num_base_bdevs": 4, 00:12:12.760 "num_base_bdevs_discovered": 2, 00:12:12.760 "num_base_bdevs_operational": 4, 00:12:12.760 "base_bdevs_list": [ 00:12:12.760 { 00:12:12.760 "name": null, 00:12:12.760 
"uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:12.760 "is_configured": false, 00:12:12.760 "data_offset": 0, 00:12:12.760 "data_size": 63488 00:12:12.760 }, 00:12:12.760 { 00:12:12.760 "name": null, 00:12:12.760 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:12.760 "is_configured": false, 00:12:12.760 "data_offset": 0, 00:12:12.760 "data_size": 63488 00:12:12.760 }, 00:12:12.760 { 00:12:12.760 "name": "BaseBdev3", 00:12:12.760 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:12.760 "is_configured": true, 00:12:12.760 "data_offset": 2048, 00:12:12.760 "data_size": 63488 00:12:12.760 }, 00:12:12.760 { 00:12:12.760 "name": "BaseBdev4", 00:12:12.760 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:12.760 "is_configured": true, 00:12:12.760 "data_offset": 2048, 00:12:12.760 "data_size": 63488 00:12:12.760 } 00:12:12.760 ] 00:12:12.760 }' 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.760 11:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.019 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.019 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:13.019 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.019 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.019 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.335 [2024-11-05 11:28:12.326620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.335 "name": "Existed_Raid", 00:12:13.335 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:13.335 "strip_size_kb": 64, 00:12:13.335 "state": "configuring", 00:12:13.335 "raid_level": "raid0", 00:12:13.335 "superblock": true, 00:12:13.335 "num_base_bdevs": 4, 00:12:13.335 "num_base_bdevs_discovered": 3, 00:12:13.335 "num_base_bdevs_operational": 4, 00:12:13.335 "base_bdevs_list": [ 00:12:13.335 { 00:12:13.335 "name": null, 00:12:13.335 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:13.335 "is_configured": false, 00:12:13.335 "data_offset": 0, 00:12:13.335 "data_size": 63488 00:12:13.335 }, 00:12:13.335 { 00:12:13.335 "name": "BaseBdev2", 00:12:13.335 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:13.335 "is_configured": true, 00:12:13.335 "data_offset": 2048, 00:12:13.335 "data_size": 63488 00:12:13.335 }, 00:12:13.335 { 00:12:13.335 "name": "BaseBdev3", 00:12:13.335 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:13.335 "is_configured": true, 00:12:13.335 "data_offset": 2048, 00:12:13.335 "data_size": 63488 00:12:13.335 }, 00:12:13.335 { 00:12:13.335 "name": "BaseBdev4", 00:12:13.335 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:13.335 "is_configured": true, 00:12:13.335 "data_offset": 2048, 00:12:13.335 "data_size": 63488 00:12:13.335 } 00:12:13.335 ] 00:12:13.335 }' 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.335 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.595 11:28:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 35b12925-ced3-42a0-8d58-7de4c5ce5626 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.595 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.854 [2024-11-05 11:28:12.886247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:13.854 [2024-11-05 11:28:12.886555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:13.854 [2024-11-05 11:28:12.886605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:13.854 [2024-11-05 11:28:12.886892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:13.854 NewBaseBdev 00:12:13.854 [2024-11-05 11:28:12.887103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:13.854 [2024-11-05 11:28:12.887120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:13.854 [2024-11-05 11:28:12.887283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:13.854 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.855 11:28:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.855 [ 00:12:13.855 { 00:12:13.855 "name": "NewBaseBdev", 00:12:13.855 "aliases": [ 00:12:13.855 "35b12925-ced3-42a0-8d58-7de4c5ce5626" 00:12:13.855 ], 00:12:13.855 "product_name": "Malloc disk", 00:12:13.855 "block_size": 512, 00:12:13.855 "num_blocks": 65536, 00:12:13.855 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:13.855 "assigned_rate_limits": { 00:12:13.855 "rw_ios_per_sec": 0, 00:12:13.855 "rw_mbytes_per_sec": 0, 00:12:13.855 "r_mbytes_per_sec": 0, 00:12:13.855 "w_mbytes_per_sec": 0 00:12:13.855 }, 00:12:13.855 "claimed": true, 00:12:13.855 "claim_type": "exclusive_write", 00:12:13.855 "zoned": false, 00:12:13.855 "supported_io_types": { 00:12:13.855 "read": true, 00:12:13.855 "write": true, 00:12:13.855 "unmap": true, 00:12:13.855 "flush": true, 00:12:13.855 "reset": true, 00:12:13.855 "nvme_admin": false, 00:12:13.855 "nvme_io": false, 00:12:13.855 "nvme_io_md": false, 00:12:13.855 "write_zeroes": true, 00:12:13.855 "zcopy": true, 00:12:13.855 "get_zone_info": false, 00:12:13.855 "zone_management": false, 00:12:13.855 "zone_append": false, 00:12:13.855 "compare": false, 00:12:13.855 "compare_and_write": false, 00:12:13.855 "abort": true, 00:12:13.855 "seek_hole": false, 00:12:13.855 "seek_data": false, 00:12:13.855 "copy": true, 00:12:13.855 "nvme_iov_md": false 00:12:13.855 }, 00:12:13.855 "memory_domains": [ 00:12:13.855 { 00:12:13.855 "dma_device_id": "system", 00:12:13.855 "dma_device_type": 1 00:12:13.855 }, 00:12:13.855 { 00:12:13.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.855 "dma_device_type": 2 00:12:13.855 } 00:12:13.855 ], 00:12:13.855 "driver_specific": {} 00:12:13.855 } 00:12:13.855 ] 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:13.855 11:28:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.855 "name": "Existed_Raid", 00:12:13.855 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:13.855 "strip_size_kb": 64, 00:12:13.855 
"state": "online", 00:12:13.855 "raid_level": "raid0", 00:12:13.855 "superblock": true, 00:12:13.855 "num_base_bdevs": 4, 00:12:13.855 "num_base_bdevs_discovered": 4, 00:12:13.855 "num_base_bdevs_operational": 4, 00:12:13.855 "base_bdevs_list": [ 00:12:13.855 { 00:12:13.855 "name": "NewBaseBdev", 00:12:13.855 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:13.855 "is_configured": true, 00:12:13.855 "data_offset": 2048, 00:12:13.855 "data_size": 63488 00:12:13.855 }, 00:12:13.855 { 00:12:13.855 "name": "BaseBdev2", 00:12:13.855 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:13.855 "is_configured": true, 00:12:13.855 "data_offset": 2048, 00:12:13.855 "data_size": 63488 00:12:13.855 }, 00:12:13.855 { 00:12:13.855 "name": "BaseBdev3", 00:12:13.855 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:13.855 "is_configured": true, 00:12:13.855 "data_offset": 2048, 00:12:13.855 "data_size": 63488 00:12:13.855 }, 00:12:13.855 { 00:12:13.855 "name": "BaseBdev4", 00:12:13.855 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:13.855 "is_configured": true, 00:12:13.855 "data_offset": 2048, 00:12:13.855 "data_size": 63488 00:12:13.855 } 00:12:13.855 ] 00:12:13.855 }' 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.855 11:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.115 
11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.115 [2024-11-05 11:28:13.357868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.115 "name": "Existed_Raid", 00:12:14.115 "aliases": [ 00:12:14.115 "86a2daff-a5ac-4bfe-930a-c01f59c82afa" 00:12:14.115 ], 00:12:14.115 "product_name": "Raid Volume", 00:12:14.115 "block_size": 512, 00:12:14.115 "num_blocks": 253952, 00:12:14.115 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:14.115 "assigned_rate_limits": { 00:12:14.115 "rw_ios_per_sec": 0, 00:12:14.115 "rw_mbytes_per_sec": 0, 00:12:14.115 "r_mbytes_per_sec": 0, 00:12:14.115 "w_mbytes_per_sec": 0 00:12:14.115 }, 00:12:14.115 "claimed": false, 00:12:14.115 "zoned": false, 00:12:14.115 "supported_io_types": { 00:12:14.115 "read": true, 00:12:14.115 "write": true, 00:12:14.115 "unmap": true, 00:12:14.115 "flush": true, 00:12:14.115 "reset": true, 00:12:14.115 "nvme_admin": false, 00:12:14.115 "nvme_io": false, 00:12:14.115 "nvme_io_md": false, 00:12:14.115 "write_zeroes": true, 00:12:14.115 "zcopy": false, 00:12:14.115 "get_zone_info": false, 00:12:14.115 "zone_management": false, 00:12:14.115 "zone_append": false, 00:12:14.115 "compare": false, 00:12:14.115 "compare_and_write": false, 00:12:14.115 "abort": 
false, 00:12:14.115 "seek_hole": false, 00:12:14.115 "seek_data": false, 00:12:14.115 "copy": false, 00:12:14.115 "nvme_iov_md": false 00:12:14.115 }, 00:12:14.115 "memory_domains": [ 00:12:14.115 { 00:12:14.115 "dma_device_id": "system", 00:12:14.115 "dma_device_type": 1 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.115 "dma_device_type": 2 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "system", 00:12:14.115 "dma_device_type": 1 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.115 "dma_device_type": 2 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "system", 00:12:14.115 "dma_device_type": 1 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.115 "dma_device_type": 2 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "system", 00:12:14.115 "dma_device_type": 1 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.115 "dma_device_type": 2 00:12:14.115 } 00:12:14.115 ], 00:12:14.115 "driver_specific": { 00:12:14.115 "raid": { 00:12:14.115 "uuid": "86a2daff-a5ac-4bfe-930a-c01f59c82afa", 00:12:14.115 "strip_size_kb": 64, 00:12:14.115 "state": "online", 00:12:14.115 "raid_level": "raid0", 00:12:14.115 "superblock": true, 00:12:14.115 "num_base_bdevs": 4, 00:12:14.115 "num_base_bdevs_discovered": 4, 00:12:14.115 "num_base_bdevs_operational": 4, 00:12:14.115 "base_bdevs_list": [ 00:12:14.115 { 00:12:14.115 "name": "NewBaseBdev", 00:12:14.115 "uuid": "35b12925-ced3-42a0-8d58-7de4c5ce5626", 00:12:14.115 "is_configured": true, 00:12:14.115 "data_offset": 2048, 00:12:14.115 "data_size": 63488 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "name": "BaseBdev2", 00:12:14.115 "uuid": "3f281b13-0af0-48d8-a33e-6fc230e1bd6c", 00:12:14.115 "is_configured": true, 00:12:14.115 "data_offset": 2048, 00:12:14.115 "data_size": 63488 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 
"name": "BaseBdev3", 00:12:14.115 "uuid": "1f33eace-a75f-4a2b-934f-3d9a20b3b5ad", 00:12:14.115 "is_configured": true, 00:12:14.115 "data_offset": 2048, 00:12:14.115 "data_size": 63488 00:12:14.115 }, 00:12:14.115 { 00:12:14.115 "name": "BaseBdev4", 00:12:14.115 "uuid": "1c27984c-e68f-42ba-87ec-8123d77c1fc8", 00:12:14.115 "is_configured": true, 00:12:14.115 "data_offset": 2048, 00:12:14.115 "data_size": 63488 00:12:14.115 } 00:12:14.115 ] 00:12:14.115 } 00:12:14.115 } 00:12:14.115 }' 00:12:14.115 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:14.375 BaseBdev2 00:12:14.375 BaseBdev3 00:12:14.375 BaseBdev4' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.375 11:28:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.375 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.636 [2024-11-05 11:28:13.692958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.636 [2024-11-05 11:28:13.692988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.636 [2024-11-05 11:28:13.693058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.636 [2024-11-05 11:28:13.693125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.636 [2024-11-05 11:28:13.693135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70182 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70182 ']' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70182 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70182 00:12:14.636 killing process with pid 70182 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70182' 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70182 00:12:14.636 [2024-11-05 11:28:13.729684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.636 11:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70182 00:12:14.895 [2024-11-05 11:28:14.120255] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.276 11:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:16.276 00:12:16.276 real 0m11.452s 00:12:16.276 user 0m18.238s 00:12:16.276 sys 0m2.028s 00:12:16.276 11:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:16.276 
************************************ 00:12:16.276 END TEST raid_state_function_test_sb 00:12:16.276 ************************************ 00:12:16.276 11:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.276 11:28:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:16.276 11:28:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:16.276 11:28:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:16.276 11:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.276 ************************************ 00:12:16.276 START TEST raid_superblock_test 00:12:16.276 ************************************ 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:16.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70849 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70849 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70849 ']' 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:16.276 11:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.276 [2024-11-05 11:28:15.367291] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:16.276 [2024-11-05 11:28:15.367433] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:12:16.276 [2024-11-05 11:28:15.540202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.536 [2024-11-05 11:28:15.656326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.795 [2024-11-05 11:28:15.857208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.795 [2024-11-05 11:28:15.857273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.055 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:17.056 
11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.056 malloc1 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.056 [2024-11-05 11:28:16.242502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:17.056 [2024-11-05 11:28:16.242612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.056 [2024-11-05 11:28:16.242653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:17.056 [2024-11-05 11:28:16.242681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.056 [2024-11-05 11:28:16.244962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.056 [2024-11-05 11:28:16.245033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.056 pt1 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.056 malloc2 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.056 [2024-11-05 11:28:16.297170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.056 [2024-11-05 11:28:16.297224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.056 [2024-11-05 11:28:16.297261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:17.056 [2024-11-05 11:28:16.297269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.056 [2024-11-05 11:28:16.299310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.056 [2024-11-05 11:28:16.299346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.056 
pt2 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.056 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 malloc3 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 [2024-11-05 11:28:16.361532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.317 [2024-11-05 11:28:16.361630] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.317 [2024-11-05 11:28:16.361686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:17.317 [2024-11-05 11:28:16.361719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.317 [2024-11-05 11:28:16.363949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.317 [2024-11-05 11:28:16.364029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.317 pt3 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 malloc4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 [2024-11-05 11:28:16.417594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.317 [2024-11-05 11:28:16.417688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.317 [2024-11-05 11:28:16.417743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:17.317 [2024-11-05 11:28:16.417771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.317 [2024-11-05 11:28:16.419822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.317 [2024-11-05 11:28:16.419895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.317 pt4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 [2024-11-05 11:28:16.429606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:17.317 [2024-11-05 
11:28:16.431444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.317 [2024-11-05 11:28:16.431567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.317 [2024-11-05 11:28:16.431651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.317 [2024-11-05 11:28:16.431857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:17.317 [2024-11-05 11:28:16.431904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.317 [2024-11-05 11:28:16.432202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.317 [2024-11-05 11:28:16.432407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:17.317 [2024-11-05 11:28:16.432452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:17.317 [2024-11-05 11:28:16.432643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.317 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.317 "name": "raid_bdev1", 00:12:17.317 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:17.317 "strip_size_kb": 64, 00:12:17.317 "state": "online", 00:12:17.317 "raid_level": "raid0", 00:12:17.317 "superblock": true, 00:12:17.317 "num_base_bdevs": 4, 00:12:17.317 "num_base_bdevs_discovered": 4, 00:12:17.317 "num_base_bdevs_operational": 4, 00:12:17.317 "base_bdevs_list": [ 00:12:17.317 { 00:12:17.317 "name": "pt1", 00:12:17.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.317 "is_configured": true, 00:12:17.317 "data_offset": 2048, 00:12:17.317 "data_size": 63488 00:12:17.317 }, 00:12:17.317 { 00:12:17.317 "name": "pt2", 00:12:17.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.317 "is_configured": true, 00:12:17.317 "data_offset": 2048, 00:12:17.317 "data_size": 63488 00:12:17.317 }, 00:12:17.317 { 00:12:17.317 "name": "pt3", 00:12:17.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.317 "is_configured": true, 00:12:17.317 "data_offset": 2048, 00:12:17.317 
"data_size": 63488 00:12:17.317 }, 00:12:17.317 { 00:12:17.317 "name": "pt4", 00:12:17.317 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.317 "is_configured": true, 00:12:17.317 "data_offset": 2048, 00:12:17.317 "data_size": 63488 00:12:17.317 } 00:12:17.318 ] 00:12:17.318 }' 00:12:17.318 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.318 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.888 [2024-11-05 11:28:16.869236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.888 "name": "raid_bdev1", 00:12:17.888 "aliases": [ 00:12:17.888 "4f36129d-f0f3-4e18-8f57-17d5ca102a41" 
00:12:17.888 ], 00:12:17.888 "product_name": "Raid Volume", 00:12:17.888 "block_size": 512, 00:12:17.888 "num_blocks": 253952, 00:12:17.888 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:17.888 "assigned_rate_limits": { 00:12:17.888 "rw_ios_per_sec": 0, 00:12:17.888 "rw_mbytes_per_sec": 0, 00:12:17.888 "r_mbytes_per_sec": 0, 00:12:17.888 "w_mbytes_per_sec": 0 00:12:17.888 }, 00:12:17.888 "claimed": false, 00:12:17.888 "zoned": false, 00:12:17.888 "supported_io_types": { 00:12:17.888 "read": true, 00:12:17.888 "write": true, 00:12:17.888 "unmap": true, 00:12:17.888 "flush": true, 00:12:17.888 "reset": true, 00:12:17.888 "nvme_admin": false, 00:12:17.888 "nvme_io": false, 00:12:17.888 "nvme_io_md": false, 00:12:17.888 "write_zeroes": true, 00:12:17.888 "zcopy": false, 00:12:17.888 "get_zone_info": false, 00:12:17.888 "zone_management": false, 00:12:17.888 "zone_append": false, 00:12:17.888 "compare": false, 00:12:17.888 "compare_and_write": false, 00:12:17.888 "abort": false, 00:12:17.888 "seek_hole": false, 00:12:17.888 "seek_data": false, 00:12:17.888 "copy": false, 00:12:17.888 "nvme_iov_md": false 00:12:17.888 }, 00:12:17.888 "memory_domains": [ 00:12:17.888 { 00:12:17.888 "dma_device_id": "system", 00:12:17.888 "dma_device_type": 1 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.888 "dma_device_type": 2 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "system", 00:12:17.888 "dma_device_type": 1 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.888 "dma_device_type": 2 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "system", 00:12:17.888 "dma_device_type": 1 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.888 "dma_device_type": 2 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": "system", 00:12:17.888 "dma_device_type": 1 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:17.888 "dma_device_type": 2 00:12:17.888 } 00:12:17.888 ], 00:12:17.888 "driver_specific": { 00:12:17.888 "raid": { 00:12:17.888 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:17.888 "strip_size_kb": 64, 00:12:17.888 "state": "online", 00:12:17.888 "raid_level": "raid0", 00:12:17.888 "superblock": true, 00:12:17.888 "num_base_bdevs": 4, 00:12:17.888 "num_base_bdevs_discovered": 4, 00:12:17.888 "num_base_bdevs_operational": 4, 00:12:17.888 "base_bdevs_list": [ 00:12:17.888 { 00:12:17.888 "name": "pt1", 00:12:17.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.888 "is_configured": true, 00:12:17.888 "data_offset": 2048, 00:12:17.888 "data_size": 63488 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "name": "pt2", 00:12:17.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.888 "is_configured": true, 00:12:17.888 "data_offset": 2048, 00:12:17.888 "data_size": 63488 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "name": "pt3", 00:12:17.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.888 "is_configured": true, 00:12:17.888 "data_offset": 2048, 00:12:17.888 "data_size": 63488 00:12:17.888 }, 00:12:17.888 { 00:12:17.888 "name": "pt4", 00:12:17.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.888 "is_configured": true, 00:12:17.888 "data_offset": 2048, 00:12:17.888 "data_size": 63488 00:12:17.888 } 00:12:17.888 ] 00:12:17.888 } 00:12:17.888 } 00:12:17.888 }' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:17.888 pt2 00:12:17.888 pt3 00:12:17.888 pt4' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.888 11:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.888 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.888 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.888 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.888 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.888 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.889 11:28:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.889 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 [2024-11-05 11:28:17.204600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f36129d-f0f3-4e18-8f57-17d5ca102a41 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4f36129d-f0f3-4e18-8f57-17d5ca102a41 ']' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 [2024-11-05 11:28:17.248217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.149 [2024-11-05 11:28:17.248245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.149 [2024-11-05 11:28:17.248336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.149 [2024-11-05 11:28:17.248404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.149 [2024-11-05 11:28:17.248418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.149 11:28:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.149 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 [2024-11-05 11:28:17.395999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:18.149 [2024-11-05 11:28:17.398204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:18.149 [2024-11-05 11:28:17.398321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:18.149 [2024-11-05 11:28:17.398367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:18.149 [2024-11-05 11:28:17.398427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:18.149 [2024-11-05 11:28:17.398488] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:18.149 [2024-11-05 11:28:17.398511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:18.149 [2024-11-05 11:28:17.398532] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:18.149 [2024-11-05 11:28:17.398548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.149 [2024-11-05 11:28:17.398564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:18.149 request: 00:12:18.149 { 00:12:18.149 "name": "raid_bdev1", 00:12:18.149 "raid_level": "raid0", 00:12:18.149 "base_bdevs": [ 00:12:18.149 "malloc1", 00:12:18.149 "malloc2", 00:12:18.149 "malloc3", 00:12:18.149 "malloc4" 00:12:18.149 ], 00:12:18.149 "strip_size_kb": 64, 00:12:18.149 "superblock": false, 00:12:18.149 "method": "bdev_raid_create", 00:12:18.149 "req_id": 1 00:12:18.149 } 00:12:18.149 Got JSON-RPC error response 00:12:18.149 response: 00:12:18.149 { 00:12:18.149 "code": -17, 00:12:18.149 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:18.149 } 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:18.150 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.409 [2024-11-05 11:28:17.463840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.409 [2024-11-05 11:28:17.463972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.409 [2024-11-05 11:28:17.464014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:18.409 [2024-11-05 11:28:17.464097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.409 [2024-11-05 11:28:17.466569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.409 [2024-11-05 11:28:17.466655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.409 [2024-11-05 11:28:17.466779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:18.409 [2024-11-05 11:28:17.466879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.409 pt1 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.409 "name": "raid_bdev1", 00:12:18.409 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:18.409 "strip_size_kb": 64, 00:12:18.409 "state": "configuring", 00:12:18.409 "raid_level": "raid0", 00:12:18.409 "superblock": true, 00:12:18.409 "num_base_bdevs": 4, 00:12:18.409 "num_base_bdevs_discovered": 1, 00:12:18.409 "num_base_bdevs_operational": 4, 00:12:18.409 "base_bdevs_list": [ 00:12:18.409 { 00:12:18.409 "name": "pt1", 00:12:18.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.409 "is_configured": true, 00:12:18.409 "data_offset": 2048, 00:12:18.409 "data_size": 63488 00:12:18.409 }, 00:12:18.409 { 00:12:18.409 "name": null, 00:12:18.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.409 "is_configured": false, 00:12:18.409 "data_offset": 2048, 00:12:18.409 "data_size": 63488 00:12:18.409 }, 00:12:18.409 { 00:12:18.409 "name": null, 00:12:18.409 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.409 "is_configured": false, 00:12:18.409 "data_offset": 2048, 00:12:18.409 "data_size": 63488 00:12:18.409 }, 00:12:18.409 { 00:12:18.409 "name": null, 00:12:18.409 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.409 "is_configured": false, 00:12:18.409 "data_offset": 2048, 00:12:18.409 "data_size": 63488 00:12:18.409 } 00:12:18.409 ] 00:12:18.409 }' 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.409 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.683 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 [2024-11-05 11:28:17.911133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.684 [2024-11-05 11:28:17.911225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.684 [2024-11-05 11:28:17.911246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:18.684 [2024-11-05 11:28:17.911258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.684 [2024-11-05 11:28:17.911718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.684 [2024-11-05 11:28:17.911739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.684 [2024-11-05 11:28:17.911836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.684 [2024-11-05 11:28:17.911861] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.684 pt2 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 [2024-11-05 11:28:17.923110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.684 11:28:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 11:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.942 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.942 "name": "raid_bdev1", 00:12:18.942 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:18.942 "strip_size_kb": 64, 00:12:18.942 "state": "configuring", 00:12:18.942 "raid_level": "raid0", 00:12:18.942 "superblock": true, 00:12:18.942 "num_base_bdevs": 4, 00:12:18.942 "num_base_bdevs_discovered": 1, 00:12:18.942 "num_base_bdevs_operational": 4, 00:12:18.942 "base_bdevs_list": [ 00:12:18.942 { 00:12:18.942 "name": "pt1", 00:12:18.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.942 "is_configured": true, 00:12:18.942 "data_offset": 2048, 00:12:18.942 "data_size": 63488 00:12:18.942 }, 00:12:18.942 { 00:12:18.942 "name": null, 00:12:18.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.942 "is_configured": false, 00:12:18.942 "data_offset": 0, 00:12:18.942 "data_size": 63488 00:12:18.942 }, 00:12:18.942 { 00:12:18.942 "name": null, 00:12:18.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.942 "is_configured": false, 00:12:18.942 "data_offset": 2048, 00:12:18.942 "data_size": 63488 00:12:18.942 }, 00:12:18.942 { 00:12:18.942 "name": null, 00:12:18.942 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.942 "is_configured": false, 00:12:18.942 "data_offset": 2048, 00:12:18.942 "data_size": 63488 00:12:18.942 } 00:12:18.942 ] 00:12:18.942 }' 00:12:18.942 11:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.942 11:28:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.201 [2024-11-05 11:28:18.378309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.201 [2024-11-05 11:28:18.378373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.201 [2024-11-05 11:28:18.378394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:19.201 [2024-11-05 11:28:18.378403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.201 [2024-11-05 11:28:18.378842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.201 [2024-11-05 11:28:18.378857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.201 [2024-11-05 11:28:18.378940] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.201 [2024-11-05 11:28:18.378968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.201 pt2 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.201 [2024-11-05 11:28:18.390256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.201 [2024-11-05 11:28:18.390305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.201 [2024-11-05 11:28:18.390328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:19.201 [2024-11-05 11:28:18.390338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.201 [2024-11-05 11:28:18.390687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.201 [2024-11-05 11:28:18.390702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.201 [2024-11-05 11:28:18.390760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:19.201 [2024-11-05 11:28:18.390775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.201 pt3 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.201 [2024-11-05 11:28:18.402219] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:19.201 [2024-11-05 11:28:18.402265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.201 [2024-11-05 11:28:18.402282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:19.201 [2024-11-05 11:28:18.402289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.201 [2024-11-05 11:28:18.402630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.201 [2024-11-05 11:28:18.402644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:19.201 [2024-11-05 11:28:18.402702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:19.201 [2024-11-05 11:28:18.402719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:19.201 [2024-11-05 11:28:18.402887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:19.201 [2024-11-05 11:28:18.402895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:19.201 [2024-11-05 11:28:18.403166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.201 [2024-11-05 11:28:18.403314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:19.201 [2024-11-05 11:28:18.403332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:19.201 [2024-11-05 11:28:18.403453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.201 pt4 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.201 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.202 "name": "raid_bdev1", 00:12:19.202 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:19.202 "strip_size_kb": 64, 00:12:19.202 "state": "online", 00:12:19.202 "raid_level": "raid0", 00:12:19.202 
"superblock": true, 00:12:19.202 "num_base_bdevs": 4, 00:12:19.202 "num_base_bdevs_discovered": 4, 00:12:19.202 "num_base_bdevs_operational": 4, 00:12:19.202 "base_bdevs_list": [ 00:12:19.202 { 00:12:19.202 "name": "pt1", 00:12:19.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.202 "is_configured": true, 00:12:19.202 "data_offset": 2048, 00:12:19.202 "data_size": 63488 00:12:19.202 }, 00:12:19.202 { 00:12:19.202 "name": "pt2", 00:12:19.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.202 "is_configured": true, 00:12:19.202 "data_offset": 2048, 00:12:19.202 "data_size": 63488 00:12:19.202 }, 00:12:19.202 { 00:12:19.202 "name": "pt3", 00:12:19.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.202 "is_configured": true, 00:12:19.202 "data_offset": 2048, 00:12:19.202 "data_size": 63488 00:12:19.202 }, 00:12:19.202 { 00:12:19.202 "name": "pt4", 00:12:19.202 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.202 "is_configured": true, 00:12:19.202 "data_offset": 2048, 00:12:19.202 "data_size": 63488 00:12:19.202 } 00:12:19.202 ] 00:12:19.202 }' 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.202 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.770 11:28:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.770 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.771 [2024-11-05 11:28:18.825894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.771 "name": "raid_bdev1", 00:12:19.771 "aliases": [ 00:12:19.771 "4f36129d-f0f3-4e18-8f57-17d5ca102a41" 00:12:19.771 ], 00:12:19.771 "product_name": "Raid Volume", 00:12:19.771 "block_size": 512, 00:12:19.771 "num_blocks": 253952, 00:12:19.771 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:19.771 "assigned_rate_limits": { 00:12:19.771 "rw_ios_per_sec": 0, 00:12:19.771 "rw_mbytes_per_sec": 0, 00:12:19.771 "r_mbytes_per_sec": 0, 00:12:19.771 "w_mbytes_per_sec": 0 00:12:19.771 }, 00:12:19.771 "claimed": false, 00:12:19.771 "zoned": false, 00:12:19.771 "supported_io_types": { 00:12:19.771 "read": true, 00:12:19.771 "write": true, 00:12:19.771 "unmap": true, 00:12:19.771 "flush": true, 00:12:19.771 "reset": true, 00:12:19.771 "nvme_admin": false, 00:12:19.771 "nvme_io": false, 00:12:19.771 "nvme_io_md": false, 00:12:19.771 "write_zeroes": true, 00:12:19.771 "zcopy": false, 00:12:19.771 "get_zone_info": false, 00:12:19.771 "zone_management": false, 00:12:19.771 "zone_append": false, 00:12:19.771 "compare": false, 00:12:19.771 "compare_and_write": false, 00:12:19.771 "abort": false, 00:12:19.771 "seek_hole": false, 00:12:19.771 "seek_data": false, 00:12:19.771 "copy": false, 00:12:19.771 "nvme_iov_md": false 00:12:19.771 }, 00:12:19.771 
"memory_domains": [ 00:12:19.771 { 00:12:19.771 "dma_device_id": "system", 00:12:19.771 "dma_device_type": 1 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.771 "dma_device_type": 2 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "system", 00:12:19.771 "dma_device_type": 1 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.771 "dma_device_type": 2 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "system", 00:12:19.771 "dma_device_type": 1 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.771 "dma_device_type": 2 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "system", 00:12:19.771 "dma_device_type": 1 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.771 "dma_device_type": 2 00:12:19.771 } 00:12:19.771 ], 00:12:19.771 "driver_specific": { 00:12:19.771 "raid": { 00:12:19.771 "uuid": "4f36129d-f0f3-4e18-8f57-17d5ca102a41", 00:12:19.771 "strip_size_kb": 64, 00:12:19.771 "state": "online", 00:12:19.771 "raid_level": "raid0", 00:12:19.771 "superblock": true, 00:12:19.771 "num_base_bdevs": 4, 00:12:19.771 "num_base_bdevs_discovered": 4, 00:12:19.771 "num_base_bdevs_operational": 4, 00:12:19.771 "base_bdevs_list": [ 00:12:19.771 { 00:12:19.771 "name": "pt1", 00:12:19.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": "pt2", 00:12:19.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": "pt3", 00:12:19.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 
00:12:19.771 }, 00:12:19.771 { 00:12:19.771 "name": "pt4", 00:12:19.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.771 "is_configured": true, 00:12:19.771 "data_offset": 2048, 00:12:19.771 "data_size": 63488 00:12:19.771 } 00:12:19.771 ] 00:12:19.771 } 00:12:19.771 } 00:12:19.771 }' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:19.771 pt2 00:12:19.771 pt3 00:12:19.771 pt4' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.771 11:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.771 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.031 [2024-11-05 11:28:19.157317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4f36129d-f0f3-4e18-8f57-17d5ca102a41 '!=' 4f36129d-f0f3-4e18-8f57-17d5ca102a41 ']' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70849 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70849 ']' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70849 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70849 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.031 killing process with pid 70849 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70849' 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70849 00:12:20.031 [2024-11-05 11:28:19.231036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.031 [2024-11-05 11:28:19.231142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.031 [2024-11-05 11:28:19.231216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.031 [2024-11-05 11:28:19.231226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:20.031 11:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70849 00:12:20.600 [2024-11-05 11:28:19.617983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.539 11:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:21.539 ************************************ 00:12:21.539 END TEST raid_superblock_test 00:12:21.539 ************************************ 00:12:21.539 00:12:21.539 real 0m5.426s 00:12:21.539 user 0m7.789s 00:12:21.539 sys 0m0.963s 00:12:21.539 11:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:21.539 11:28:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.539 11:28:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:21.539 11:28:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:21.539 11:28:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:21.539 11:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.539 ************************************ 00:12:21.539 START TEST raid_read_error_test 00:12:21.539 ************************************ 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vC3eHcQUFx 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71111 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71111 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71111 ']' 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:21.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:21.539 11:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.799 [2024-11-05 11:28:20.877198] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:21.799 [2024-11-05 11:28:20.877338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71111 ] 00:12:21.799 [2024-11-05 11:28:21.052608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.059 [2024-11-05 11:28:21.167650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.319 [2024-11-05 11:28:21.367943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.319 [2024-11-05 11:28:21.368016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.578 BaseBdev1_malloc 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:22.578 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.579 true 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.579 [2024-11-05 11:28:21.778347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:22.579 [2024-11-05 11:28:21.778419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.579 [2024-11-05 11:28:21.778440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:22.579 [2024-11-05 11:28:21.778450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.579 [2024-11-05 11:28:21.780503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.579 [2024-11-05 11:28:21.780543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.579 BaseBdev1 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.579 BaseBdev2_malloc 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.579 true 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.579 [2024-11-05 11:28:21.844646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:22.579 [2024-11-05 11:28:21.844702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.579 [2024-11-05 11:28:21.844720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:22.579 [2024-11-05 11:28:21.844730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.579 [2024-11-05 11:28:21.846761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.579 [2024-11-05 11:28:21.846801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:22.579 BaseBdev2 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.579 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 BaseBdev3_malloc 00:12:22.839 11:28:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 true 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 [2024-11-05 11:28:21.922561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:22.839 [2024-11-05 11:28:21.922632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.839 [2024-11-05 11:28:21.922651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:22.839 [2024-11-05 11:28:21.922662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.839 [2024-11-05 11:28:21.924773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.839 [2024-11-05 11:28:21.924813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:22.839 BaseBdev3 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 BaseBdev4_malloc 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 true 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 [2024-11-05 11:28:21.988949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:22.839 [2024-11-05 11:28:21.989007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.839 [2024-11-05 11:28:21.989026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.839 [2024-11-05 11:28:21.989036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.839 [2024-11-05 11:28:21.991101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.839 [2024-11-05 11:28:21.991150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:22.839 BaseBdev4 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 [2024-11-05 11:28:22.000993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.839 [2024-11-05 11:28:22.002752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.839 [2024-11-05 11:28:22.002830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.839 [2024-11-05 11:28:22.002895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.839 [2024-11-05 11:28:22.003155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:22.839 [2024-11-05 11:28:22.003176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:22.839 [2024-11-05 11:28:22.003425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.839 [2024-11-05 11:28:22.003595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:22.839 [2024-11-05 11:28:22.003610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:22.839 [2024-11-05 11:28:22.003775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:22.839 11:28:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.839 "name": "raid_bdev1", 00:12:22.839 "uuid": "63d1de10-de22-48a0-a780-6eb449df0f7a", 00:12:22.839 "strip_size_kb": 64, 00:12:22.839 "state": "online", 00:12:22.839 "raid_level": "raid0", 00:12:22.839 "superblock": true, 00:12:22.839 "num_base_bdevs": 4, 00:12:22.839 "num_base_bdevs_discovered": 4, 00:12:22.839 "num_base_bdevs_operational": 4, 00:12:22.839 "base_bdevs_list": [ 00:12:22.839 
{ 00:12:22.839 "name": "BaseBdev1", 00:12:22.839 "uuid": "56943017-c68b-51cb-8e52-591e7d862a74", 00:12:22.839 "is_configured": true, 00:12:22.839 "data_offset": 2048, 00:12:22.839 "data_size": 63488 00:12:22.839 }, 00:12:22.839 { 00:12:22.839 "name": "BaseBdev2", 00:12:22.839 "uuid": "ad2145ad-8fb5-5bd5-8987-d0500a393706", 00:12:22.839 "is_configured": true, 00:12:22.839 "data_offset": 2048, 00:12:22.839 "data_size": 63488 00:12:22.839 }, 00:12:22.839 { 00:12:22.839 "name": "BaseBdev3", 00:12:22.839 "uuid": "d49481ef-7f71-5538-8eea-c058ad60829e", 00:12:22.839 "is_configured": true, 00:12:22.839 "data_offset": 2048, 00:12:22.839 "data_size": 63488 00:12:22.839 }, 00:12:22.839 { 00:12:22.839 "name": "BaseBdev4", 00:12:22.839 "uuid": "2f84485c-6a0c-5845-b78a-cec8040a892b", 00:12:22.839 "is_configured": true, 00:12:22.839 "data_offset": 2048, 00:12:22.839 "data_size": 63488 00:12:22.839 } 00:12:22.839 ] 00:12:22.839 }' 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.839 11:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.465 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.465 11:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:23.465 [2024-11-05 11:28:22.525243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.404 11:28:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.404 11:28:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.404 "name": "raid_bdev1", 00:12:24.404 "uuid": "63d1de10-de22-48a0-a780-6eb449df0f7a", 00:12:24.404 "strip_size_kb": 64, 00:12:24.404 "state": "online", 00:12:24.404 "raid_level": "raid0", 00:12:24.404 "superblock": true, 00:12:24.404 "num_base_bdevs": 4, 00:12:24.404 "num_base_bdevs_discovered": 4, 00:12:24.404 "num_base_bdevs_operational": 4, 00:12:24.404 "base_bdevs_list": [ 00:12:24.404 { 00:12:24.404 "name": "BaseBdev1", 00:12:24.404 "uuid": "56943017-c68b-51cb-8e52-591e7d862a74", 00:12:24.404 "is_configured": true, 00:12:24.404 "data_offset": 2048, 00:12:24.404 "data_size": 63488 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "BaseBdev2", 00:12:24.404 "uuid": "ad2145ad-8fb5-5bd5-8987-d0500a393706", 00:12:24.404 "is_configured": true, 00:12:24.404 "data_offset": 2048, 00:12:24.404 "data_size": 63488 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "BaseBdev3", 00:12:24.404 "uuid": "d49481ef-7f71-5538-8eea-c058ad60829e", 00:12:24.404 "is_configured": true, 00:12:24.404 "data_offset": 2048, 00:12:24.404 "data_size": 63488 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "BaseBdev4", 00:12:24.404 "uuid": "2f84485c-6a0c-5845-b78a-cec8040a892b", 00:12:24.404 "is_configured": true, 00:12:24.404 "data_offset": 2048, 00:12:24.404 "data_size": 63488 00:12:24.404 } 00:12:24.404 ] 00:12:24.404 }' 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.404 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.664 [2024-11-05 11:28:23.913821] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.664 [2024-11-05 11:28:23.913859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.664 [2024-11-05 11:28:23.916683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.664 [2024-11-05 11:28:23.916746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.664 [2024-11-05 11:28:23.916790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.664 [2024-11-05 11:28:23.916803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.664 { 00:12:24.664 "results": [ 00:12:24.664 { 00:12:24.664 "job": "raid_bdev1", 00:12:24.664 "core_mask": "0x1", 00:12:24.664 "workload": "randrw", 00:12:24.664 "percentage": 50, 00:12:24.664 "status": "finished", 00:12:24.664 "queue_depth": 1, 00:12:24.664 "io_size": 131072, 00:12:24.664 "runtime": 1.389391, 00:12:24.664 "iops": 15757.983173922963, 00:12:24.664 "mibps": 1969.7478967403704, 00:12:24.664 "io_failed": 1, 00:12:24.664 "io_timeout": 0, 00:12:24.664 "avg_latency_us": 88.31178867780027, 00:12:24.664 "min_latency_us": 25.152838427947597, 00:12:24.664 "max_latency_us": 1445.2262008733624 00:12:24.664 } 00:12:24.664 ], 00:12:24.664 "core_count": 1 00:12:24.664 } 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71111 00:12:24.664 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71111 ']' 00:12:24.665 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71111 00:12:24.665 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:24.665 11:28:23 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:24.665 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71111 00:12:24.924 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:24.924 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:24.924 killing process with pid 71111 00:12:24.924 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71111' 00:12:24.924 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71111 00:12:24.924 [2024-11-05 11:28:23.964820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.924 11:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71111 00:12:25.183 [2024-11-05 11:28:24.287857] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vC3eHcQUFx 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:26.563 00:12:26.563 real 0m4.679s 00:12:26.563 user 0m5.550s 00:12:26.563 sys 0m0.568s 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:26.563 11:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.563 ************************************ 00:12:26.563 END TEST raid_read_error_test 00:12:26.563 ************************************ 00:12:26.563 11:28:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:26.563 11:28:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:26.563 11:28:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.563 11:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.563 ************************************ 00:12:26.563 START TEST raid_write_error_test 00:12:26.563 ************************************ 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8JXfUq0OXV 00:12:26.563 11:28:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71262 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71262 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71262 ']' 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.563 11:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.563 [2024-11-05 11:28:25.628934] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:26.563 [2024-11-05 11:28:25.629064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:12:26.563 [2024-11-05 11:28:25.804591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.822 [2024-11-05 11:28:25.917232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.082 [2024-11-05 11:28:26.114387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.082 [2024-11-05 11:28:26.114459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 BaseBdev1_malloc 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 true 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-11-05 11:28:26.523139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.341 [2024-11-05 11:28:26.523205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.341 [2024-11-05 11:28:26.523227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.341 [2024-11-05 11:28:26.523239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.341 [2024-11-05 11:28:26.525493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.341 [2024-11-05 11:28:26.525535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.341 BaseBdev1 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 BaseBdev2_malloc 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.341 11:28:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 true 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.341 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-11-05 11:28:26.593377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.341 [2024-11-05 11:28:26.593450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.341 [2024-11-05 11:28:26.593470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.341 [2024-11-05 11:28:26.593482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-05 11:28:26.595854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-05 11:28:26.595902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.342 BaseBdev2 00:12:27.342 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.342 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.342 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.342 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.342 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:27.601 BaseBdev3_malloc 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.601 true 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.601 [2024-11-05 11:28:26.676050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:27.601 [2024-11-05 11:28:26.676121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.601 [2024-11-05 11:28:26.676166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:27.601 [2024-11-05 11:28:26.676179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.601 [2024-11-05 11:28:26.678448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.601 [2024-11-05 11:28:26.678490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.601 BaseBdev3 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.601 BaseBdev4_malloc 00:12:27.601 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 true 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 [2024-11-05 11:28:26.741094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:27.602 [2024-11-05 11:28:26.741158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.602 [2024-11-05 11:28:26.741176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.602 [2024-11-05 11:28:26.741187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.602 [2024-11-05 11:28:26.743303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.602 [2024-11-05 11:28:26.743347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.602 BaseBdev4 
00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 [2024-11-05 11:28:26.753129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.602 [2024-11-05 11:28:26.754884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.602 [2024-11-05 11:28:26.754980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.602 [2024-11-05 11:28:26.755065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.602 [2024-11-05 11:28:26.755297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:27.602 [2024-11-05 11:28:26.755321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:27.602 [2024-11-05 11:28:26.755571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:27.602 [2024-11-05 11:28:26.755742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:27.602 [2024-11-05 11:28:26.755756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:27.602 [2024-11-05 11:28:26.755917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.602 "name": "raid_bdev1", 00:12:27.602 "uuid": "5a2939df-06ab-4048-a14c-0580b4a27036", 00:12:27.602 "strip_size_kb": 64, 00:12:27.602 "state": "online", 00:12:27.602 "raid_level": "raid0", 00:12:27.602 "superblock": true, 00:12:27.602 "num_base_bdevs": 4, 00:12:27.602 "num_base_bdevs_discovered": 4, 00:12:27.602 
"num_base_bdevs_operational": 4, 00:12:27.602 "base_bdevs_list": [ 00:12:27.602 { 00:12:27.602 "name": "BaseBdev1", 00:12:27.602 "uuid": "dac265c9-7dec-5657-86cb-a063e4a79eca", 00:12:27.602 "is_configured": true, 00:12:27.602 "data_offset": 2048, 00:12:27.602 "data_size": 63488 00:12:27.602 }, 00:12:27.602 { 00:12:27.602 "name": "BaseBdev2", 00:12:27.602 "uuid": "4ed8ee50-bfed-5cdc-b79a-c931769b33d1", 00:12:27.602 "is_configured": true, 00:12:27.602 "data_offset": 2048, 00:12:27.602 "data_size": 63488 00:12:27.602 }, 00:12:27.602 { 00:12:27.602 "name": "BaseBdev3", 00:12:27.602 "uuid": "38220001-12bd-586b-b53c-9ce4208a0189", 00:12:27.602 "is_configured": true, 00:12:27.602 "data_offset": 2048, 00:12:27.602 "data_size": 63488 00:12:27.602 }, 00:12:27.602 { 00:12:27.602 "name": "BaseBdev4", 00:12:27.602 "uuid": "610a4e5e-752e-50d3-9d55-b6ddc44f973a", 00:12:27.602 "is_configured": true, 00:12:27.602 "data_offset": 2048, 00:12:27.602 "data_size": 63488 00:12:27.602 } 00:12:27.602 ] 00:12:27.602 }' 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.602 11:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 11:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.169 11:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.169 [2024-11-05 11:28:27.281439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.104 "name": "raid_bdev1", 00:12:29.104 "uuid": "5a2939df-06ab-4048-a14c-0580b4a27036", 00:12:29.104 "strip_size_kb": 64, 00:12:29.104 "state": "online", 00:12:29.104 "raid_level": "raid0", 00:12:29.104 "superblock": true, 00:12:29.104 "num_base_bdevs": 4, 00:12:29.104 "num_base_bdevs_discovered": 4, 00:12:29.104 "num_base_bdevs_operational": 4, 00:12:29.104 "base_bdevs_list": [ 00:12:29.104 { 00:12:29.104 "name": "BaseBdev1", 00:12:29.104 "uuid": "dac265c9-7dec-5657-86cb-a063e4a79eca", 00:12:29.104 "is_configured": true, 00:12:29.104 "data_offset": 2048, 00:12:29.104 "data_size": 63488 00:12:29.104 }, 00:12:29.104 { 00:12:29.104 "name": "BaseBdev2", 00:12:29.104 "uuid": "4ed8ee50-bfed-5cdc-b79a-c931769b33d1", 00:12:29.104 "is_configured": true, 00:12:29.104 "data_offset": 2048, 00:12:29.104 "data_size": 63488 00:12:29.104 }, 00:12:29.104 { 00:12:29.104 "name": "BaseBdev3", 00:12:29.104 "uuid": "38220001-12bd-586b-b53c-9ce4208a0189", 00:12:29.104 "is_configured": true, 00:12:29.104 "data_offset": 2048, 00:12:29.104 "data_size": 63488 00:12:29.104 }, 00:12:29.104 { 00:12:29.104 "name": "BaseBdev4", 00:12:29.104 "uuid": "610a4e5e-752e-50d3-9d55-b6ddc44f973a", 00:12:29.104 "is_configured": true, 00:12:29.104 "data_offset": 2048, 00:12:29.104 "data_size": 63488 00:12:29.104 } 00:12:29.104 ] 00:12:29.104 }' 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.104 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:29.671 [2024-11-05 11:28:28.693123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.671 [2024-11-05 11:28:28.693175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.671 [2024-11-05 11:28:28.695834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.671 [2024-11-05 11:28:28.695912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.671 [2024-11-05 11:28:28.695963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.671 [2024-11-05 11:28:28.695976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:29.671 { 00:12:29.671 "results": [ 00:12:29.671 { 00:12:29.671 "job": "raid_bdev1", 00:12:29.671 "core_mask": "0x1", 00:12:29.671 "workload": "randrw", 00:12:29.671 "percentage": 50, 00:12:29.671 "status": "finished", 00:12:29.671 "queue_depth": 1, 00:12:29.671 "io_size": 131072, 00:12:29.671 "runtime": 1.412527, 00:12:29.671 "iops": 15727.840954544587, 00:12:29.671 "mibps": 1965.9801193180733, 00:12:29.671 "io_failed": 1, 00:12:29.671 "io_timeout": 0, 00:12:29.671 "avg_latency_us": 88.49204132403429, 00:12:29.671 "min_latency_us": 26.494323144104804, 00:12:29.671 "max_latency_us": 1366.5257641921398 00:12:29.671 } 00:12:29.671 ], 00:12:29.671 "core_count": 1 00:12:29.671 } 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71262 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71262 ']' 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71262 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71262 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:29.671 killing process with pid 71262 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71262' 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71262 00:12:29.671 [2024-11-05 11:28:28.743796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.671 11:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71262 00:12:29.929 [2024-11-05 11:28:29.071860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8JXfUq0OXV 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:31.307 00:12:31.307 real 0m4.749s 00:12:31.307 user 0m5.621s 00:12:31.307 sys 0m0.587s 00:12:31.307 
11:28:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.307 11:28:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.307 ************************************ 00:12:31.307 END TEST raid_write_error_test 00:12:31.307 ************************************ 00:12:31.307 11:28:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:31.307 11:28:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:31.307 11:28:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:31.307 11:28:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.307 11:28:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.307 ************************************ 00:12:31.307 START TEST raid_state_function_test 00:12:31.307 ************************************ 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.307 11:28:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:31.307 11:28:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71400 00:12:31.307 Process raid pid: 71400 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71400' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71400 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71400 ']' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:31.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:31.307 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.307 [2024-11-05 11:28:30.438118] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:31.307 [2024-11-05 11:28:30.438272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.567 [2024-11-05 11:28:30.616644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.567 [2024-11-05 11:28:30.727234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.827 [2024-11-05 11:28:30.928277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.827 [2024-11-05 11:28:30.928311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.087 [2024-11-05 11:28:31.269978] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.087 [2024-11-05 11:28:31.270036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.087 [2024-11-05 11:28:31.270049] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.087 [2024-11-05 11:28:31.270059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.087 [2024-11-05 11:28:31.270064] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:32.087 [2024-11-05 11:28:31.270074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.087 [2024-11-05 11:28:31.270082] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.087 [2024-11-05 11:28:31.270091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.087 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.087 "name": "Existed_Raid", 00:12:32.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.087 "strip_size_kb": 64, 00:12:32.087 "state": "configuring", 00:12:32.087 "raid_level": "concat", 00:12:32.087 "superblock": false, 00:12:32.087 "num_base_bdevs": 4, 00:12:32.087 "num_base_bdevs_discovered": 0, 00:12:32.087 "num_base_bdevs_operational": 4, 00:12:32.087 "base_bdevs_list": [ 00:12:32.087 { 00:12:32.087 "name": "BaseBdev1", 00:12:32.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.087 "is_configured": false, 00:12:32.087 "data_offset": 0, 00:12:32.087 "data_size": 0 00:12:32.087 }, 00:12:32.087 { 00:12:32.087 "name": "BaseBdev2", 00:12:32.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.087 "is_configured": false, 00:12:32.087 "data_offset": 0, 00:12:32.087 "data_size": 0 00:12:32.087 }, 00:12:32.087 { 00:12:32.088 "name": "BaseBdev3", 00:12:32.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.088 "is_configured": false, 00:12:32.088 "data_offset": 0, 00:12:32.088 "data_size": 0 00:12:32.088 }, 00:12:32.088 { 00:12:32.088 "name": "BaseBdev4", 00:12:32.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.088 "is_configured": false, 00:12:32.088 "data_offset": 0, 00:12:32.088 "data_size": 0 00:12:32.088 } 00:12:32.088 ] 00:12:32.088 }' 00:12:32.088 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.088 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 [2024-11-05 11:28:31.697196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.657 [2024-11-05 11:28:31.697238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 [2024-11-05 11:28:31.705173] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:32.657 [2024-11-05 11:28:31.705213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:32.657 [2024-11-05 11:28:31.705222] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.657 [2024-11-05 11:28:31.705231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.657 [2024-11-05 11:28:31.705237] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.657 [2024-11-05 11:28:31.705245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.657 [2024-11-05 11:28:31.705251] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.657 [2024-11-05 11:28:31.705260] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 [2024-11-05 11:28:31.751063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.657 BaseBdev1 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 [ 00:12:32.657 { 00:12:32.657 "name": "BaseBdev1", 00:12:32.657 "aliases": [ 00:12:32.657 "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd" 00:12:32.657 ], 00:12:32.657 "product_name": "Malloc disk", 00:12:32.657 "block_size": 512, 00:12:32.657 "num_blocks": 65536, 00:12:32.657 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:32.657 "assigned_rate_limits": { 00:12:32.657 "rw_ios_per_sec": 0, 00:12:32.657 "rw_mbytes_per_sec": 0, 00:12:32.657 "r_mbytes_per_sec": 0, 00:12:32.657 "w_mbytes_per_sec": 0 00:12:32.657 }, 00:12:32.657 "claimed": true, 00:12:32.657 "claim_type": "exclusive_write", 00:12:32.657 "zoned": false, 00:12:32.657 "supported_io_types": { 00:12:32.657 "read": true, 00:12:32.657 "write": true, 00:12:32.657 "unmap": true, 00:12:32.657 "flush": true, 00:12:32.657 "reset": true, 00:12:32.657 "nvme_admin": false, 00:12:32.657 "nvme_io": false, 00:12:32.657 "nvme_io_md": false, 00:12:32.657 "write_zeroes": true, 00:12:32.657 "zcopy": true, 00:12:32.657 "get_zone_info": false, 00:12:32.657 "zone_management": false, 00:12:32.657 "zone_append": false, 00:12:32.657 "compare": false, 00:12:32.657 "compare_and_write": false, 00:12:32.657 "abort": true, 00:12:32.657 "seek_hole": false, 00:12:32.657 "seek_data": false, 00:12:32.657 "copy": true, 00:12:32.657 "nvme_iov_md": false 00:12:32.657 }, 00:12:32.657 "memory_domains": [ 00:12:32.657 { 00:12:32.657 "dma_device_id": "system", 00:12:32.657 "dma_device_type": 1 00:12:32.657 }, 00:12:32.657 { 00:12:32.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.657 "dma_device_type": 2 00:12:32.657 } 00:12:32.657 ], 00:12:32.657 "driver_specific": {} 00:12:32.657 } 00:12:32.657 ] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.657 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.657 "name": "Existed_Raid", 
00:12:32.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.657 "strip_size_kb": 64, 00:12:32.657 "state": "configuring", 00:12:32.657 "raid_level": "concat", 00:12:32.657 "superblock": false, 00:12:32.657 "num_base_bdevs": 4, 00:12:32.657 "num_base_bdevs_discovered": 1, 00:12:32.657 "num_base_bdevs_operational": 4, 00:12:32.658 "base_bdevs_list": [ 00:12:32.658 { 00:12:32.658 "name": "BaseBdev1", 00:12:32.658 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:32.658 "is_configured": true, 00:12:32.658 "data_offset": 0, 00:12:32.658 "data_size": 65536 00:12:32.658 }, 00:12:32.658 { 00:12:32.658 "name": "BaseBdev2", 00:12:32.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.658 "is_configured": false, 00:12:32.658 "data_offset": 0, 00:12:32.658 "data_size": 0 00:12:32.658 }, 00:12:32.658 { 00:12:32.658 "name": "BaseBdev3", 00:12:32.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.658 "is_configured": false, 00:12:32.658 "data_offset": 0, 00:12:32.658 "data_size": 0 00:12:32.658 }, 00:12:32.658 { 00:12:32.658 "name": "BaseBdev4", 00:12:32.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.658 "is_configured": false, 00:12:32.658 "data_offset": 0, 00:12:32.658 "data_size": 0 00:12:32.658 } 00:12:32.658 ] 00:12:32.658 }' 00:12:32.658 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.658 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.228 [2024-11-05 11:28:32.226271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.228 [2024-11-05 11:28:32.226332] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.228 [2024-11-05 11:28:32.234326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.228 [2024-11-05 11:28:32.236124] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.228 [2024-11-05 11:28:32.236177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.228 [2024-11-05 11:28:32.236187] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.228 [2024-11-05 11:28:32.236198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.228 [2024-11-05 11:28:32.236205] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.228 [2024-11-05 11:28:32.236212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.228 "name": "Existed_Raid", 00:12:33.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.228 "strip_size_kb": 64, 00:12:33.228 "state": "configuring", 00:12:33.228 "raid_level": "concat", 00:12:33.228 "superblock": false, 00:12:33.228 "num_base_bdevs": 4, 00:12:33.228 
"num_base_bdevs_discovered": 1, 00:12:33.228 "num_base_bdevs_operational": 4, 00:12:33.228 "base_bdevs_list": [ 00:12:33.228 { 00:12:33.228 "name": "BaseBdev1", 00:12:33.228 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:33.228 "is_configured": true, 00:12:33.228 "data_offset": 0, 00:12:33.228 "data_size": 65536 00:12:33.228 }, 00:12:33.228 { 00:12:33.228 "name": "BaseBdev2", 00:12:33.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.228 "is_configured": false, 00:12:33.228 "data_offset": 0, 00:12:33.228 "data_size": 0 00:12:33.228 }, 00:12:33.228 { 00:12:33.228 "name": "BaseBdev3", 00:12:33.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.228 "is_configured": false, 00:12:33.228 "data_offset": 0, 00:12:33.228 "data_size": 0 00:12:33.228 }, 00:12:33.228 { 00:12:33.228 "name": "BaseBdev4", 00:12:33.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.228 "is_configured": false, 00:12:33.228 "data_offset": 0, 00:12:33.228 "data_size": 0 00:12:33.228 } 00:12:33.228 ] 00:12:33.228 }' 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.228 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.488 [2024-11-05 11:28:32.687179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.488 BaseBdev2 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:33.488 11:28:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.488 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.488 [ 00:12:33.488 { 00:12:33.488 "name": "BaseBdev2", 00:12:33.488 "aliases": [ 00:12:33.488 "b61a5e4e-2c11-4cba-bbc2-337084655a4e" 00:12:33.488 ], 00:12:33.488 "product_name": "Malloc disk", 00:12:33.488 "block_size": 512, 00:12:33.488 "num_blocks": 65536, 00:12:33.488 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:33.488 "assigned_rate_limits": { 00:12:33.488 "rw_ios_per_sec": 0, 00:12:33.488 "rw_mbytes_per_sec": 0, 00:12:33.488 "r_mbytes_per_sec": 0, 00:12:33.488 "w_mbytes_per_sec": 0 00:12:33.488 }, 00:12:33.488 "claimed": true, 00:12:33.488 "claim_type": "exclusive_write", 00:12:33.488 "zoned": false, 00:12:33.488 "supported_io_types": { 
00:12:33.488 "read": true, 00:12:33.488 "write": true, 00:12:33.488 "unmap": true, 00:12:33.488 "flush": true, 00:12:33.488 "reset": true, 00:12:33.488 "nvme_admin": false, 00:12:33.488 "nvme_io": false, 00:12:33.488 "nvme_io_md": false, 00:12:33.488 "write_zeroes": true, 00:12:33.488 "zcopy": true, 00:12:33.488 "get_zone_info": false, 00:12:33.488 "zone_management": false, 00:12:33.488 "zone_append": false, 00:12:33.488 "compare": false, 00:12:33.488 "compare_and_write": false, 00:12:33.488 "abort": true, 00:12:33.488 "seek_hole": false, 00:12:33.488 "seek_data": false, 00:12:33.488 "copy": true, 00:12:33.488 "nvme_iov_md": false 00:12:33.488 }, 00:12:33.488 "memory_domains": [ 00:12:33.488 { 00:12:33.488 "dma_device_id": "system", 00:12:33.488 "dma_device_type": 1 00:12:33.488 }, 00:12:33.488 { 00:12:33.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.489 "dma_device_type": 2 00:12:33.489 } 00:12:33.489 ], 00:12:33.489 "driver_specific": {} 00:12:33.489 } 00:12:33.489 ] 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.489 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.749 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.749 "name": "Existed_Raid", 00:12:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.749 "strip_size_kb": 64, 00:12:33.749 "state": "configuring", 00:12:33.749 "raid_level": "concat", 00:12:33.749 "superblock": false, 00:12:33.749 "num_base_bdevs": 4, 00:12:33.749 "num_base_bdevs_discovered": 2, 00:12:33.749 "num_base_bdevs_operational": 4, 00:12:33.749 "base_bdevs_list": [ 00:12:33.749 { 00:12:33.749 "name": "BaseBdev1", 00:12:33.749 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:33.749 "is_configured": true, 00:12:33.749 "data_offset": 0, 00:12:33.749 "data_size": 65536 00:12:33.749 }, 00:12:33.749 { 00:12:33.749 "name": "BaseBdev2", 00:12:33.749 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:33.749 
"is_configured": true, 00:12:33.749 "data_offset": 0, 00:12:33.749 "data_size": 65536 00:12:33.749 }, 00:12:33.749 { 00:12:33.749 "name": "BaseBdev3", 00:12:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.749 "is_configured": false, 00:12:33.749 "data_offset": 0, 00:12:33.749 "data_size": 0 00:12:33.749 }, 00:12:33.749 { 00:12:33.749 "name": "BaseBdev4", 00:12:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.749 "is_configured": false, 00:12:33.749 "data_offset": 0, 00:12:33.749 "data_size": 0 00:12:33.749 } 00:12:33.749 ] 00:12:33.749 }' 00:12:33.749 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.749 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.008 [2024-11-05 11:28:33.175247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.008 BaseBdev3 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.008 [ 00:12:34.008 { 00:12:34.008 "name": "BaseBdev3", 00:12:34.008 "aliases": [ 00:12:34.008 "10fc95ca-71b1-46d6-9721-67c0041fa87d" 00:12:34.008 ], 00:12:34.008 "product_name": "Malloc disk", 00:12:34.008 "block_size": 512, 00:12:34.008 "num_blocks": 65536, 00:12:34.008 "uuid": "10fc95ca-71b1-46d6-9721-67c0041fa87d", 00:12:34.008 "assigned_rate_limits": { 00:12:34.008 "rw_ios_per_sec": 0, 00:12:34.008 "rw_mbytes_per_sec": 0, 00:12:34.008 "r_mbytes_per_sec": 0, 00:12:34.008 "w_mbytes_per_sec": 0 00:12:34.008 }, 00:12:34.008 "claimed": true, 00:12:34.008 "claim_type": "exclusive_write", 00:12:34.008 "zoned": false, 00:12:34.008 "supported_io_types": { 00:12:34.008 "read": true, 00:12:34.008 "write": true, 00:12:34.008 "unmap": true, 00:12:34.008 "flush": true, 00:12:34.008 "reset": true, 00:12:34.008 "nvme_admin": false, 00:12:34.008 "nvme_io": false, 00:12:34.008 "nvme_io_md": false, 00:12:34.008 "write_zeroes": true, 00:12:34.008 "zcopy": true, 00:12:34.008 "get_zone_info": false, 00:12:34.008 "zone_management": false, 00:12:34.008 "zone_append": false, 00:12:34.008 "compare": false, 00:12:34.008 "compare_and_write": false, 
00:12:34.008 "abort": true, 00:12:34.008 "seek_hole": false, 00:12:34.008 "seek_data": false, 00:12:34.008 "copy": true, 00:12:34.008 "nvme_iov_md": false 00:12:34.008 }, 00:12:34.008 "memory_domains": [ 00:12:34.008 { 00:12:34.008 "dma_device_id": "system", 00:12:34.008 "dma_device_type": 1 00:12:34.008 }, 00:12:34.008 { 00:12:34.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.008 "dma_device_type": 2 00:12:34.008 } 00:12:34.008 ], 00:12:34.008 "driver_specific": {} 00:12:34.008 } 00:12:34.008 ] 00:12:34.008 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.009 "name": "Existed_Raid", 00:12:34.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.009 "strip_size_kb": 64, 00:12:34.009 "state": "configuring", 00:12:34.009 "raid_level": "concat", 00:12:34.009 "superblock": false, 00:12:34.009 "num_base_bdevs": 4, 00:12:34.009 "num_base_bdevs_discovered": 3, 00:12:34.009 "num_base_bdevs_operational": 4, 00:12:34.009 "base_bdevs_list": [ 00:12:34.009 { 00:12:34.009 "name": "BaseBdev1", 00:12:34.009 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:34.009 "is_configured": true, 00:12:34.009 "data_offset": 0, 00:12:34.009 "data_size": 65536 00:12:34.009 }, 00:12:34.009 { 00:12:34.009 "name": "BaseBdev2", 00:12:34.009 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:34.009 "is_configured": true, 00:12:34.009 "data_offset": 0, 00:12:34.009 "data_size": 65536 00:12:34.009 }, 00:12:34.009 { 00:12:34.009 "name": "BaseBdev3", 00:12:34.009 "uuid": "10fc95ca-71b1-46d6-9721-67c0041fa87d", 00:12:34.009 "is_configured": true, 00:12:34.009 "data_offset": 0, 00:12:34.009 "data_size": 65536 00:12:34.009 }, 00:12:34.009 { 00:12:34.009 "name": "BaseBdev4", 00:12:34.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.009 "is_configured": false, 
00:12:34.009 "data_offset": 0, 00:12:34.009 "data_size": 0 00:12:34.009 } 00:12:34.009 ] 00:12:34.009 }' 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.009 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 [2024-11-05 11:28:33.699299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.578 [2024-11-05 11:28:33.699350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:34.578 [2024-11-05 11:28:33.699375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:34.578 [2024-11-05 11:28:33.699643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:34.578 [2024-11-05 11:28:33.699831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:34.578 [2024-11-05 11:28:33.699852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:34.578 [2024-11-05 11:28:33.700099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.578 BaseBdev4 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.578 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 [ 00:12:34.578 { 00:12:34.578 "name": "BaseBdev4", 00:12:34.578 "aliases": [ 00:12:34.578 "adcecbed-c502-4c37-8b34-2eaad84b6de0" 00:12:34.578 ], 00:12:34.578 "product_name": "Malloc disk", 00:12:34.578 "block_size": 512, 00:12:34.578 "num_blocks": 65536, 00:12:34.578 "uuid": "adcecbed-c502-4c37-8b34-2eaad84b6de0", 00:12:34.578 "assigned_rate_limits": { 00:12:34.578 "rw_ios_per_sec": 0, 00:12:34.578 "rw_mbytes_per_sec": 0, 00:12:34.578 "r_mbytes_per_sec": 0, 00:12:34.578 "w_mbytes_per_sec": 0 00:12:34.578 }, 00:12:34.578 "claimed": true, 00:12:34.578 "claim_type": "exclusive_write", 00:12:34.579 "zoned": false, 00:12:34.579 "supported_io_types": { 00:12:34.579 "read": true, 00:12:34.579 "write": true, 00:12:34.579 "unmap": true, 00:12:34.579 "flush": true, 00:12:34.579 "reset": true, 00:12:34.579 
"nvme_admin": false, 00:12:34.579 "nvme_io": false, 00:12:34.579 "nvme_io_md": false, 00:12:34.579 "write_zeroes": true, 00:12:34.579 "zcopy": true, 00:12:34.579 "get_zone_info": false, 00:12:34.579 "zone_management": false, 00:12:34.579 "zone_append": false, 00:12:34.579 "compare": false, 00:12:34.579 "compare_and_write": false, 00:12:34.579 "abort": true, 00:12:34.579 "seek_hole": false, 00:12:34.579 "seek_data": false, 00:12:34.579 "copy": true, 00:12:34.579 "nvme_iov_md": false 00:12:34.579 }, 00:12:34.579 "memory_domains": [ 00:12:34.579 { 00:12:34.579 "dma_device_id": "system", 00:12:34.579 "dma_device_type": 1 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.579 "dma_device_type": 2 00:12:34.579 } 00:12:34.579 ], 00:12:34.579 "driver_specific": {} 00:12:34.579 } 00:12:34.579 ] 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.579 
11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.579 "name": "Existed_Raid", 00:12:34.579 "uuid": "175bdb57-ab77-4045-b248-583475c6fe60", 00:12:34.579 "strip_size_kb": 64, 00:12:34.579 "state": "online", 00:12:34.579 "raid_level": "concat", 00:12:34.579 "superblock": false, 00:12:34.579 "num_base_bdevs": 4, 00:12:34.579 "num_base_bdevs_discovered": 4, 00:12:34.579 "num_base_bdevs_operational": 4, 00:12:34.579 "base_bdevs_list": [ 00:12:34.579 { 00:12:34.579 "name": "BaseBdev1", 00:12:34.579 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 0, 00:12:34.579 "data_size": 65536 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev2", 00:12:34.579 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 0, 00:12:34.579 "data_size": 65536 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev3", 
00:12:34.579 "uuid": "10fc95ca-71b1-46d6-9721-67c0041fa87d", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 0, 00:12:34.579 "data_size": 65536 00:12:34.579 }, 00:12:34.579 { 00:12:34.579 "name": "BaseBdev4", 00:12:34.579 "uuid": "adcecbed-c502-4c37-8b34-2eaad84b6de0", 00:12:34.579 "is_configured": true, 00:12:34.579 "data_offset": 0, 00:12:34.579 "data_size": 65536 00:12:34.579 } 00:12:34.579 ] 00:12:34.579 }' 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.579 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.148 [2024-11-05 11:28:34.174916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.148 
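The state checks traced above boil down to: fetch the raid bdev via `bdev_raid_get_bdevs all`, pick out the entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compare its `state`, `raid_level`, `strip_size_kb`, and base-bdev counts against the expected values. Below is a minimal, self-contained Python sketch of those same field checks, run against an abridged copy of the JSON from the trace; the `verify_raid_bdev_state` name is borrowed from `bdev/bdev_raid.sh`, but this reimplementation is illustrative only and not part of the test suite.

```python
import json

# Abridged sample of `rpc.py bdev_raid_get_bdevs all` output, copied from
# the trace above (fields not used by the check are omitted).
RAID_BDEVS = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Python equivalent of the jq/bash checks in the trace above."""
    # select(.name == name), as the jq filter does
    info = next(b for b in bdevs if b["name"] == name)
    # "discovered" base bdevs are the configured entries in base_bdevs_list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

# Same assertion the test makes once BaseBdev4 is attached:
# Existed_Raid online concat 64 4
print(verify_raid_bdev_state(RAID_BDEVS, "Existed_Raid",
                             "online", "concat", 64, 4))  # True
```

Against a live SPDK target the input would come from `rpc.py bdev_raid_get_bdevs all` rather than an inline string; the comparison logic is the same either way.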
11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:35.148 "name": "Existed_Raid", 00:12:35.148 "aliases": [ 00:12:35.148 "175bdb57-ab77-4045-b248-583475c6fe60" 00:12:35.148 ], 00:12:35.148 "product_name": "Raid Volume", 00:12:35.148 "block_size": 512, 00:12:35.148 "num_blocks": 262144, 00:12:35.148 "uuid": "175bdb57-ab77-4045-b248-583475c6fe60", 00:12:35.148 "assigned_rate_limits": { 00:12:35.148 "rw_ios_per_sec": 0, 00:12:35.148 "rw_mbytes_per_sec": 0, 00:12:35.148 "r_mbytes_per_sec": 0, 00:12:35.148 "w_mbytes_per_sec": 0 00:12:35.148 }, 00:12:35.148 "claimed": false, 00:12:35.148 "zoned": false, 00:12:35.148 "supported_io_types": { 00:12:35.148 "read": true, 00:12:35.148 "write": true, 00:12:35.148 "unmap": true, 00:12:35.148 "flush": true, 00:12:35.148 "reset": true, 00:12:35.148 "nvme_admin": false, 00:12:35.148 "nvme_io": false, 00:12:35.148 "nvme_io_md": false, 00:12:35.148 "write_zeroes": true, 00:12:35.148 "zcopy": false, 00:12:35.148 "get_zone_info": false, 00:12:35.148 "zone_management": false, 00:12:35.148 "zone_append": false, 00:12:35.148 "compare": false, 00:12:35.148 "compare_and_write": false, 00:12:35.148 "abort": false, 00:12:35.148 "seek_hole": false, 00:12:35.148 "seek_data": false, 00:12:35.148 "copy": false, 00:12:35.148 "nvme_iov_md": false 00:12:35.148 }, 00:12:35.148 "memory_domains": [ 00:12:35.148 { 00:12:35.148 "dma_device_id": "system", 00:12:35.148 "dma_device_type": 1 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.148 "dma_device_type": 2 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "system", 00:12:35.148 "dma_device_type": 1 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.148 "dma_device_type": 2 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "system", 00:12:35.148 "dma_device_type": 1 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:35.148 "dma_device_type": 2 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "system", 00:12:35.148 "dma_device_type": 1 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.148 "dma_device_type": 2 00:12:35.148 } 00:12:35.148 ], 00:12:35.148 "driver_specific": { 00:12:35.148 "raid": { 00:12:35.148 "uuid": "175bdb57-ab77-4045-b248-583475c6fe60", 00:12:35.148 "strip_size_kb": 64, 00:12:35.148 "state": "online", 00:12:35.148 "raid_level": "concat", 00:12:35.148 "superblock": false, 00:12:35.148 "num_base_bdevs": 4, 00:12:35.148 "num_base_bdevs_discovered": 4, 00:12:35.148 "num_base_bdevs_operational": 4, 00:12:35.148 "base_bdevs_list": [ 00:12:35.148 { 00:12:35.148 "name": "BaseBdev1", 00:12:35.148 "uuid": "f1525a4f-7dd5-40f9-a3e6-97eb821bddbd", 00:12:35.148 "is_configured": true, 00:12:35.148 "data_offset": 0, 00:12:35.148 "data_size": 65536 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "name": "BaseBdev2", 00:12:35.148 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:35.148 "is_configured": true, 00:12:35.148 "data_offset": 0, 00:12:35.148 "data_size": 65536 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "name": "BaseBdev3", 00:12:35.148 "uuid": "10fc95ca-71b1-46d6-9721-67c0041fa87d", 00:12:35.148 "is_configured": true, 00:12:35.148 "data_offset": 0, 00:12:35.148 "data_size": 65536 00:12:35.148 }, 00:12:35.148 { 00:12:35.148 "name": "BaseBdev4", 00:12:35.148 "uuid": "adcecbed-c502-4c37-8b34-2eaad84b6de0", 00:12:35.148 "is_configured": true, 00:12:35.148 "data_offset": 0, 00:12:35.148 "data_size": 65536 00:12:35.148 } 00:12:35.148 ] 00:12:35.148 } 00:12:35.148 } 00:12:35.148 }' 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:35.148 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:35.148 BaseBdev2 
00:12:35.148 BaseBdev3 00:12:35.148 BaseBdev4' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.149 11:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.149 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.408 11:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.408 [2024-11-05 11:28:34.510111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.408 [2024-11-05 11:28:34.510159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.408 [2024-11-05 11:28:34.510217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.408 "name": "Existed_Raid", 00:12:35.408 "uuid": "175bdb57-ab77-4045-b248-583475c6fe60", 00:12:35.408 "strip_size_kb": 64, 00:12:35.408 "state": "offline", 00:12:35.408 "raid_level": "concat", 00:12:35.408 "superblock": false, 00:12:35.408 "num_base_bdevs": 4, 00:12:35.408 "num_base_bdevs_discovered": 3, 00:12:35.408 "num_base_bdevs_operational": 3, 00:12:35.408 "base_bdevs_list": [ 00:12:35.408 { 00:12:35.408 "name": null, 00:12:35.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.408 "is_configured": false, 00:12:35.408 "data_offset": 0, 00:12:35.408 "data_size": 65536 00:12:35.408 }, 00:12:35.408 { 00:12:35.408 "name": "BaseBdev2", 00:12:35.408 "uuid": "b61a5e4e-2c11-4cba-bbc2-337084655a4e", 00:12:35.408 "is_configured": 
true, 00:12:35.408 "data_offset": 0, 00:12:35.408 "data_size": 65536 00:12:35.408 }, 00:12:35.408 { 00:12:35.408 "name": "BaseBdev3", 00:12:35.408 "uuid": "10fc95ca-71b1-46d6-9721-67c0041fa87d", 00:12:35.408 "is_configured": true, 00:12:35.408 "data_offset": 0, 00:12:35.408 "data_size": 65536 00:12:35.408 }, 00:12:35.408 { 00:12:35.408 "name": "BaseBdev4", 00:12:35.408 "uuid": "adcecbed-c502-4c37-8b34-2eaad84b6de0", 00:12:35.408 "is_configured": true, 00:12:35.408 "data_offset": 0, 00:12:35.408 "data_size": 65536 00:12:35.408 } 00:12:35.408 ] 00:12:35.408 }' 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.408 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.975 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.975 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.975 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.975 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.975 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.976 [2024-11-05 11:28:35.099129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.976 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.976 [2024-11-05 11:28:35.245846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:36.235 11:28:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.235 [2024-11-05 11:28:35.400941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:36.235 [2024-11-05 11:28:35.400995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.235 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 BaseBdev2 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 [ 00:12:36.495 { 00:12:36.495 "name": "BaseBdev2", 00:12:36.495 "aliases": [ 00:12:36.495 "7d234a2b-ef98-4b39-b894-b4f9b16ce48d" 00:12:36.495 ], 00:12:36.495 "product_name": "Malloc disk", 00:12:36.495 "block_size": 512, 00:12:36.495 "num_blocks": 65536, 00:12:36.495 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:36.495 "assigned_rate_limits": { 00:12:36.495 "rw_ios_per_sec": 0, 00:12:36.495 "rw_mbytes_per_sec": 0, 00:12:36.495 "r_mbytes_per_sec": 0, 00:12:36.495 "w_mbytes_per_sec": 0 00:12:36.495 }, 00:12:36.495 "claimed": false, 00:12:36.495 "zoned": false, 00:12:36.495 "supported_io_types": { 00:12:36.495 "read": true, 00:12:36.495 "write": true, 00:12:36.495 "unmap": true, 00:12:36.495 "flush": true, 00:12:36.495 "reset": true, 00:12:36.495 "nvme_admin": false, 00:12:36.495 "nvme_io": false, 00:12:36.495 "nvme_io_md": false, 00:12:36.495 "write_zeroes": true, 00:12:36.495 "zcopy": true, 00:12:36.495 "get_zone_info": false, 00:12:36.495 "zone_management": false, 00:12:36.495 "zone_append": false, 00:12:36.495 "compare": false, 00:12:36.495 "compare_and_write": false, 00:12:36.495 "abort": true, 00:12:36.495 "seek_hole": false, 00:12:36.495 "seek_data": false, 
00:12:36.495 "copy": true, 00:12:36.495 "nvme_iov_md": false 00:12:36.495 }, 00:12:36.495 "memory_domains": [ 00:12:36.495 { 00:12:36.495 "dma_device_id": "system", 00:12:36.495 "dma_device_type": 1 00:12:36.495 }, 00:12:36.495 { 00:12:36.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.495 "dma_device_type": 2 00:12:36.495 } 00:12:36.495 ], 00:12:36.495 "driver_specific": {} 00:12:36.495 } 00:12:36.495 ] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 BaseBdev3 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:36.495 
11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.495 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.495 [ 00:12:36.495 { 00:12:36.495 "name": "BaseBdev3", 00:12:36.495 "aliases": [ 00:12:36.495 "b8e8c381-e557-4aec-9619-1c1f8f5ef496" 00:12:36.495 ], 00:12:36.495 "product_name": "Malloc disk", 00:12:36.495 "block_size": 512, 00:12:36.495 "num_blocks": 65536, 00:12:36.495 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:36.495 "assigned_rate_limits": { 00:12:36.495 "rw_ios_per_sec": 0, 00:12:36.495 "rw_mbytes_per_sec": 0, 00:12:36.495 "r_mbytes_per_sec": 0, 00:12:36.495 "w_mbytes_per_sec": 0 00:12:36.495 }, 00:12:36.495 "claimed": false, 00:12:36.495 "zoned": false, 00:12:36.495 "supported_io_types": { 00:12:36.495 "read": true, 00:12:36.495 "write": true, 00:12:36.495 "unmap": true, 00:12:36.495 "flush": true, 00:12:36.495 "reset": true, 00:12:36.495 "nvme_admin": false, 00:12:36.495 "nvme_io": false, 00:12:36.495 "nvme_io_md": false, 00:12:36.495 "write_zeroes": true, 00:12:36.495 "zcopy": true, 00:12:36.495 "get_zone_info": false, 00:12:36.495 "zone_management": false, 00:12:36.495 "zone_append": false, 00:12:36.495 "compare": false, 00:12:36.495 "compare_and_write": false, 00:12:36.495 "abort": true, 00:12:36.495 "seek_hole": false, 00:12:36.495 "seek_data": false, 00:12:36.495 
"copy": true, 00:12:36.495 "nvme_iov_md": false 00:12:36.495 }, 00:12:36.495 "memory_domains": [ 00:12:36.495 { 00:12:36.496 "dma_device_id": "system", 00:12:36.496 "dma_device_type": 1 00:12:36.496 }, 00:12:36.496 { 00:12:36.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.496 "dma_device_type": 2 00:12:36.496 } 00:12:36.496 ], 00:12:36.496 "driver_specific": {} 00:12:36.496 } 00:12:36.496 ] 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.496 BaseBdev4 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:36.496 11:28:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.496 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.496 [ 00:12:36.496 { 00:12:36.496 "name": "BaseBdev4", 00:12:36.496 "aliases": [ 00:12:36.496 "5936c494-1624-4116-8349-1291317e177e" 00:12:36.496 ], 00:12:36.496 "product_name": "Malloc disk", 00:12:36.496 "block_size": 512, 00:12:36.496 "num_blocks": 65536, 00:12:36.496 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:36.496 "assigned_rate_limits": { 00:12:36.755 "rw_ios_per_sec": 0, 00:12:36.755 "rw_mbytes_per_sec": 0, 00:12:36.755 "r_mbytes_per_sec": 0, 00:12:36.755 "w_mbytes_per_sec": 0 00:12:36.755 }, 00:12:36.755 "claimed": false, 00:12:36.755 "zoned": false, 00:12:36.755 "supported_io_types": { 00:12:36.755 "read": true, 00:12:36.755 "write": true, 00:12:36.755 "unmap": true, 00:12:36.755 "flush": true, 00:12:36.755 "reset": true, 00:12:36.755 "nvme_admin": false, 00:12:36.755 "nvme_io": false, 00:12:36.755 "nvme_io_md": false, 00:12:36.755 "write_zeroes": true, 00:12:36.755 "zcopy": true, 00:12:36.755 "get_zone_info": false, 00:12:36.755 "zone_management": false, 00:12:36.755 "zone_append": false, 00:12:36.755 "compare": false, 00:12:36.755 "compare_and_write": false, 00:12:36.755 "abort": true, 00:12:36.755 "seek_hole": false, 00:12:36.755 "seek_data": false, 00:12:36.755 "copy": true, 
00:12:36.755 "nvme_iov_md": false 00:12:36.755 }, 00:12:36.755 "memory_domains": [ 00:12:36.755 { 00:12:36.755 "dma_device_id": "system", 00:12:36.755 "dma_device_type": 1 00:12:36.755 }, 00:12:36.755 { 00:12:36.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.755 "dma_device_type": 2 00:12:36.755 } 00:12:36.755 ], 00:12:36.755 "driver_specific": {} 00:12:36.755 } 00:12:36.755 ] 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.755 [2024-11-05 11:28:35.782794] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.755 [2024-11-05 11:28:35.782854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.755 [2024-11-05 11:28:35.782875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.755 [2024-11-05 11:28:35.784814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.755 [2024-11-05 11:28:35.784868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.755 11:28:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.755 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.756 "name": "Existed_Raid", 00:12:36.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.756 "strip_size_kb": 64, 00:12:36.756 "state": "configuring", 00:12:36.756 
"raid_level": "concat", 00:12:36.756 "superblock": false, 00:12:36.756 "num_base_bdevs": 4, 00:12:36.756 "num_base_bdevs_discovered": 3, 00:12:36.756 "num_base_bdevs_operational": 4, 00:12:36.756 "base_bdevs_list": [ 00:12:36.756 { 00:12:36.756 "name": "BaseBdev1", 00:12:36.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.756 "is_configured": false, 00:12:36.756 "data_offset": 0, 00:12:36.756 "data_size": 0 00:12:36.756 }, 00:12:36.756 { 00:12:36.756 "name": "BaseBdev2", 00:12:36.756 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:36.756 "is_configured": true, 00:12:36.756 "data_offset": 0, 00:12:36.756 "data_size": 65536 00:12:36.756 }, 00:12:36.756 { 00:12:36.756 "name": "BaseBdev3", 00:12:36.756 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:36.756 "is_configured": true, 00:12:36.756 "data_offset": 0, 00:12:36.756 "data_size": 65536 00:12:36.756 }, 00:12:36.756 { 00:12:36.756 "name": "BaseBdev4", 00:12:36.756 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:36.756 "is_configured": true, 00:12:36.756 "data_offset": 0, 00:12:36.756 "data_size": 65536 00:12:36.756 } 00:12:36.756 ] 00:12:36.756 }' 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.756 11:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.015 [2024-11-05 11:28:36.190156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.015 "name": "Existed_Raid", 00:12:37.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.015 "strip_size_kb": 64, 00:12:37.015 "state": "configuring", 00:12:37.015 "raid_level": "concat", 00:12:37.015 "superblock": false, 
00:12:37.015 "num_base_bdevs": 4, 00:12:37.015 "num_base_bdevs_discovered": 2, 00:12:37.015 "num_base_bdevs_operational": 4, 00:12:37.015 "base_bdevs_list": [ 00:12:37.015 { 00:12:37.015 "name": "BaseBdev1", 00:12:37.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.015 "is_configured": false, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 0 00:12:37.015 }, 00:12:37.015 { 00:12:37.015 "name": null, 00:12:37.015 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:37.015 "is_configured": false, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 65536 00:12:37.015 }, 00:12:37.015 { 00:12:37.015 "name": "BaseBdev3", 00:12:37.015 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:37.015 "is_configured": true, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 65536 00:12:37.015 }, 00:12:37.015 { 00:12:37.015 "name": "BaseBdev4", 00:12:37.015 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:37.015 "is_configured": true, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 65536 00:12:37.015 } 00:12:37.015 ] 00:12:37.015 }' 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.015 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:37.582 11:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.582 [2024-11-05 11:28:36.621275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.582 BaseBdev1 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.582 11:28:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.582 [ 00:12:37.582 { 00:12:37.583 "name": "BaseBdev1", 00:12:37.583 "aliases": [ 00:12:37.583 "336d7793-578a-461b-94fe-8781c102e7e0" 00:12:37.583 ], 00:12:37.583 "product_name": "Malloc disk", 00:12:37.583 "block_size": 512, 00:12:37.583 "num_blocks": 65536, 00:12:37.583 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:37.583 "assigned_rate_limits": { 00:12:37.583 "rw_ios_per_sec": 0, 00:12:37.583 "rw_mbytes_per_sec": 0, 00:12:37.583 "r_mbytes_per_sec": 0, 00:12:37.583 "w_mbytes_per_sec": 0 00:12:37.583 }, 00:12:37.583 "claimed": true, 00:12:37.583 "claim_type": "exclusive_write", 00:12:37.583 "zoned": false, 00:12:37.583 "supported_io_types": { 00:12:37.583 "read": true, 00:12:37.583 "write": true, 00:12:37.583 "unmap": true, 00:12:37.583 "flush": true, 00:12:37.583 "reset": true, 00:12:37.583 "nvme_admin": false, 00:12:37.583 "nvme_io": false, 00:12:37.583 "nvme_io_md": false, 00:12:37.583 "write_zeroes": true, 00:12:37.583 "zcopy": true, 00:12:37.583 "get_zone_info": false, 00:12:37.583 "zone_management": false, 00:12:37.583 "zone_append": false, 00:12:37.583 "compare": false, 00:12:37.583 "compare_and_write": false, 00:12:37.583 "abort": true, 00:12:37.583 "seek_hole": false, 00:12:37.583 "seek_data": false, 00:12:37.583 "copy": true, 00:12:37.583 "nvme_iov_md": false 00:12:37.583 }, 00:12:37.583 "memory_domains": [ 00:12:37.583 { 00:12:37.583 "dma_device_id": "system", 00:12:37.583 "dma_device_type": 1 00:12:37.583 }, 00:12:37.583 { 00:12:37.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.583 "dma_device_type": 2 00:12:37.583 } 00:12:37.583 ], 00:12:37.583 "driver_specific": {} 00:12:37.583 } 00:12:37.583 ] 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.583 "name": "Existed_Raid", 00:12:37.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.583 "strip_size_kb": 64, 00:12:37.583 "state": "configuring", 00:12:37.583 "raid_level": "concat", 00:12:37.583 "superblock": false, 
00:12:37.583 "num_base_bdevs": 4, 00:12:37.583 "num_base_bdevs_discovered": 3, 00:12:37.583 "num_base_bdevs_operational": 4, 00:12:37.583 "base_bdevs_list": [ 00:12:37.583 { 00:12:37.583 "name": "BaseBdev1", 00:12:37.583 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:37.583 "is_configured": true, 00:12:37.583 "data_offset": 0, 00:12:37.583 "data_size": 65536 00:12:37.583 }, 00:12:37.583 { 00:12:37.583 "name": null, 00:12:37.583 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:37.583 "is_configured": false, 00:12:37.583 "data_offset": 0, 00:12:37.583 "data_size": 65536 00:12:37.583 }, 00:12:37.583 { 00:12:37.583 "name": "BaseBdev3", 00:12:37.583 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:37.583 "is_configured": true, 00:12:37.583 "data_offset": 0, 00:12:37.583 "data_size": 65536 00:12:37.583 }, 00:12:37.583 { 00:12:37.583 "name": "BaseBdev4", 00:12:37.583 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:37.583 "is_configured": true, 00:12:37.583 "data_offset": 0, 00:12:37.583 "data_size": 65536 00:12:37.583 } 00:12:37.583 ] 00:12:37.583 }' 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.583 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:38.150 11:28:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 [2024-11-05 11:28:37.184381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.150 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.150 11:28:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.151 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.151 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.151 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.151 "name": "Existed_Raid", 00:12:38.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.151 "strip_size_kb": 64, 00:12:38.151 "state": "configuring", 00:12:38.151 "raid_level": "concat", 00:12:38.151 "superblock": false, 00:12:38.151 "num_base_bdevs": 4, 00:12:38.151 "num_base_bdevs_discovered": 2, 00:12:38.151 "num_base_bdevs_operational": 4, 00:12:38.151 "base_bdevs_list": [ 00:12:38.151 { 00:12:38.151 "name": "BaseBdev1", 00:12:38.151 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:38.151 "is_configured": true, 00:12:38.151 "data_offset": 0, 00:12:38.151 "data_size": 65536 00:12:38.151 }, 00:12:38.151 { 00:12:38.151 "name": null, 00:12:38.151 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:38.151 "is_configured": false, 00:12:38.151 "data_offset": 0, 00:12:38.151 "data_size": 65536 00:12:38.151 }, 00:12:38.151 { 00:12:38.151 "name": null, 00:12:38.151 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:38.151 "is_configured": false, 00:12:38.151 "data_offset": 0, 00:12:38.151 "data_size": 65536 00:12:38.151 }, 00:12:38.151 { 00:12:38.151 "name": "BaseBdev4", 00:12:38.151 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:38.151 "is_configured": true, 00:12:38.151 "data_offset": 0, 00:12:38.151 "data_size": 65536 00:12:38.151 } 00:12:38.151 ] 00:12:38.151 }' 00:12:38.151 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.151 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.409 [2024-11-05 11:28:37.647620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.409 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.410 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.668 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.668 "name": "Existed_Raid", 00:12:38.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.668 "strip_size_kb": 64, 00:12:38.668 "state": "configuring", 00:12:38.668 "raid_level": "concat", 00:12:38.668 "superblock": false, 00:12:38.668 "num_base_bdevs": 4, 00:12:38.668 "num_base_bdevs_discovered": 3, 00:12:38.668 "num_base_bdevs_operational": 4, 00:12:38.668 "base_bdevs_list": [ 00:12:38.668 { 00:12:38.668 "name": "BaseBdev1", 00:12:38.668 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:38.668 "is_configured": true, 00:12:38.668 "data_offset": 0, 00:12:38.668 "data_size": 65536 00:12:38.668 }, 00:12:38.668 { 00:12:38.668 "name": null, 00:12:38.668 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:38.668 "is_configured": false, 00:12:38.668 "data_offset": 0, 00:12:38.668 "data_size": 65536 00:12:38.668 }, 00:12:38.668 { 00:12:38.668 "name": "BaseBdev3", 00:12:38.668 "uuid": 
"b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:38.668 "is_configured": true, 00:12:38.668 "data_offset": 0, 00:12:38.668 "data_size": 65536 00:12:38.668 }, 00:12:38.668 { 00:12:38.668 "name": "BaseBdev4", 00:12:38.668 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:38.668 "is_configured": true, 00:12:38.668 "data_offset": 0, 00:12:38.668 "data_size": 65536 00:12:38.668 } 00:12:38.668 ] 00:12:38.668 }' 00:12:38.668 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.668 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 [2024-11-05 11:28:38.078988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.186 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.186 "name": "Existed_Raid", 00:12:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.186 "strip_size_kb": 64, 00:12:39.186 "state": "configuring", 00:12:39.186 "raid_level": "concat", 00:12:39.186 "superblock": false, 00:12:39.186 "num_base_bdevs": 4, 00:12:39.186 
"num_base_bdevs_discovered": 2, 00:12:39.186 "num_base_bdevs_operational": 4, 00:12:39.186 "base_bdevs_list": [ 00:12:39.186 { 00:12:39.186 "name": null, 00:12:39.186 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:39.186 "is_configured": false, 00:12:39.186 "data_offset": 0, 00:12:39.186 "data_size": 65536 00:12:39.186 }, 00:12:39.186 { 00:12:39.186 "name": null, 00:12:39.186 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:39.186 "is_configured": false, 00:12:39.186 "data_offset": 0, 00:12:39.186 "data_size": 65536 00:12:39.186 }, 00:12:39.186 { 00:12:39.186 "name": "BaseBdev3", 00:12:39.187 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:39.187 "is_configured": true, 00:12:39.187 "data_offset": 0, 00:12:39.187 "data_size": 65536 00:12:39.187 }, 00:12:39.187 { 00:12:39.187 "name": "BaseBdev4", 00:12:39.187 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:39.187 "is_configured": true, 00:12:39.187 "data_offset": 0, 00:12:39.187 "data_size": 65536 00:12:39.187 } 00:12:39.187 ] 00:12:39.187 }' 00:12:39.187 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.187 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.446 [2024-11-05 11:28:38.654083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.446 "name": "Existed_Raid", 00:12:39.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.446 "strip_size_kb": 64, 00:12:39.446 "state": "configuring", 00:12:39.446 "raid_level": "concat", 00:12:39.446 "superblock": false, 00:12:39.446 "num_base_bdevs": 4, 00:12:39.446 "num_base_bdevs_discovered": 3, 00:12:39.446 "num_base_bdevs_operational": 4, 00:12:39.446 "base_bdevs_list": [ 00:12:39.446 { 00:12:39.446 "name": null, 00:12:39.446 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:39.446 "is_configured": false, 00:12:39.446 "data_offset": 0, 00:12:39.446 "data_size": 65536 00:12:39.446 }, 00:12:39.446 { 00:12:39.446 "name": "BaseBdev2", 00:12:39.446 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:39.446 "is_configured": true, 00:12:39.446 "data_offset": 0, 00:12:39.446 "data_size": 65536 00:12:39.446 }, 00:12:39.446 { 00:12:39.446 "name": "BaseBdev3", 00:12:39.446 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:39.446 "is_configured": true, 00:12:39.446 "data_offset": 0, 00:12:39.446 "data_size": 65536 00:12:39.446 }, 00:12:39.446 { 00:12:39.446 "name": "BaseBdev4", 00:12:39.446 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:39.446 "is_configured": true, 00:12:39.446 "data_offset": 0, 00:12:39.446 "data_size": 65536 00:12:39.446 } 00:12:39.446 ] 00:12:39.446 }' 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.446 11:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 336d7793-578a-461b-94fe-8781c102e7e0 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 [2024-11-05 11:28:39.249277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.014 [2024-11-05 11:28:39.249332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.014 [2024-11-05 11:28:39.249339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:40.014 [2024-11-05 11:28:39.249585] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:40.014 [2024-11-05 11:28:39.249740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.014 [2024-11-05 11:28:39.249762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:40.014 [2024-11-05 11:28:39.250022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.014 NewBaseBdev 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.014 11:28:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.014 [ 00:12:40.014 { 00:12:40.014 "name": "NewBaseBdev", 00:12:40.014 "aliases": [ 00:12:40.014 "336d7793-578a-461b-94fe-8781c102e7e0" 00:12:40.014 ], 00:12:40.014 "product_name": "Malloc disk", 00:12:40.014 "block_size": 512, 00:12:40.014 "num_blocks": 65536, 00:12:40.014 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:40.014 "assigned_rate_limits": { 00:12:40.014 "rw_ios_per_sec": 0, 00:12:40.014 "rw_mbytes_per_sec": 0, 00:12:40.014 "r_mbytes_per_sec": 0, 00:12:40.014 "w_mbytes_per_sec": 0 00:12:40.014 }, 00:12:40.014 "claimed": true, 00:12:40.014 "claim_type": "exclusive_write", 00:12:40.014 "zoned": false, 00:12:40.014 "supported_io_types": { 00:12:40.014 "read": true, 00:12:40.014 "write": true, 00:12:40.014 "unmap": true, 00:12:40.014 "flush": true, 00:12:40.014 "reset": true, 00:12:40.014 "nvme_admin": false, 00:12:40.014 "nvme_io": false, 00:12:40.014 "nvme_io_md": false, 00:12:40.014 "write_zeroes": true, 00:12:40.014 "zcopy": true, 00:12:40.014 "get_zone_info": false, 00:12:40.014 "zone_management": false, 00:12:40.014 "zone_append": false, 00:12:40.014 "compare": false, 00:12:40.014 "compare_and_write": false, 00:12:40.014 "abort": true, 00:12:40.014 "seek_hole": false, 00:12:40.014 "seek_data": false, 00:12:40.014 "copy": true, 00:12:40.014 "nvme_iov_md": false 00:12:40.014 }, 00:12:40.014 "memory_domains": [ 00:12:40.014 { 00:12:40.014 "dma_device_id": "system", 00:12:40.014 "dma_device_type": 1 00:12:40.014 }, 00:12:40.014 { 00:12:40.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.014 "dma_device_type": 2 00:12:40.014 } 00:12:40.014 ], 00:12:40.014 "driver_specific": {} 00:12:40.014 } 00:12:40.014 ] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:40.014 11:28:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.014 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.274 "name": "Existed_Raid", 00:12:40.274 "uuid": "1ede976c-90b0-46fa-9f4c-4b707d0e9566", 00:12:40.274 "strip_size_kb": 64, 00:12:40.274 "state": "online", 00:12:40.274 "raid_level": 
"concat", 00:12:40.274 "superblock": false, 00:12:40.274 "num_base_bdevs": 4, 00:12:40.274 "num_base_bdevs_discovered": 4, 00:12:40.274 "num_base_bdevs_operational": 4, 00:12:40.274 "base_bdevs_list": [ 00:12:40.274 { 00:12:40.274 "name": "NewBaseBdev", 00:12:40.274 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:40.274 "is_configured": true, 00:12:40.274 "data_offset": 0, 00:12:40.274 "data_size": 65536 00:12:40.274 }, 00:12:40.274 { 00:12:40.274 "name": "BaseBdev2", 00:12:40.274 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:40.274 "is_configured": true, 00:12:40.274 "data_offset": 0, 00:12:40.274 "data_size": 65536 00:12:40.274 }, 00:12:40.274 { 00:12:40.274 "name": "BaseBdev3", 00:12:40.274 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:40.274 "is_configured": true, 00:12:40.274 "data_offset": 0, 00:12:40.274 "data_size": 65536 00:12:40.274 }, 00:12:40.274 { 00:12:40.274 "name": "BaseBdev4", 00:12:40.274 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:40.274 "is_configured": true, 00:12:40.274 "data_offset": 0, 00:12:40.274 "data_size": 65536 00:12:40.274 } 00:12:40.274 ] 00:12:40.274 }' 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.274 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.538 [2024-11-05 11:28:39.696943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.538 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.538 "name": "Existed_Raid", 00:12:40.538 "aliases": [ 00:12:40.538 "1ede976c-90b0-46fa-9f4c-4b707d0e9566" 00:12:40.538 ], 00:12:40.538 "product_name": "Raid Volume", 00:12:40.538 "block_size": 512, 00:12:40.538 "num_blocks": 262144, 00:12:40.538 "uuid": "1ede976c-90b0-46fa-9f4c-4b707d0e9566", 00:12:40.538 "assigned_rate_limits": { 00:12:40.538 "rw_ios_per_sec": 0, 00:12:40.538 "rw_mbytes_per_sec": 0, 00:12:40.538 "r_mbytes_per_sec": 0, 00:12:40.538 "w_mbytes_per_sec": 0 00:12:40.538 }, 00:12:40.538 "claimed": false, 00:12:40.538 "zoned": false, 00:12:40.538 "supported_io_types": { 00:12:40.538 "read": true, 00:12:40.538 "write": true, 00:12:40.538 "unmap": true, 00:12:40.538 "flush": true, 00:12:40.538 "reset": true, 00:12:40.538 "nvme_admin": false, 00:12:40.538 "nvme_io": false, 00:12:40.538 "nvme_io_md": false, 00:12:40.538 "write_zeroes": true, 00:12:40.538 "zcopy": false, 00:12:40.538 "get_zone_info": false, 00:12:40.538 "zone_management": false, 00:12:40.538 "zone_append": false, 00:12:40.538 "compare": false, 00:12:40.538 "compare_and_write": false, 00:12:40.538 "abort": false, 00:12:40.538 "seek_hole": false, 00:12:40.538 "seek_data": false, 00:12:40.538 "copy": false, 
00:12:40.538 "nvme_iov_md": false 00:12:40.538 }, 00:12:40.538 "memory_domains": [ 00:12:40.538 { 00:12:40.538 "dma_device_id": "system", 00:12:40.538 "dma_device_type": 1 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.538 "dma_device_type": 2 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "system", 00:12:40.538 "dma_device_type": 1 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.538 "dma_device_type": 2 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "system", 00:12:40.538 "dma_device_type": 1 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.538 "dma_device_type": 2 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "system", 00:12:40.538 "dma_device_type": 1 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.538 "dma_device_type": 2 00:12:40.538 } 00:12:40.538 ], 00:12:40.538 "driver_specific": { 00:12:40.538 "raid": { 00:12:40.538 "uuid": "1ede976c-90b0-46fa-9f4c-4b707d0e9566", 00:12:40.538 "strip_size_kb": 64, 00:12:40.538 "state": "online", 00:12:40.538 "raid_level": "concat", 00:12:40.538 "superblock": false, 00:12:40.538 "num_base_bdevs": 4, 00:12:40.538 "num_base_bdevs_discovered": 4, 00:12:40.538 "num_base_bdevs_operational": 4, 00:12:40.539 "base_bdevs_list": [ 00:12:40.539 { 00:12:40.539 "name": "NewBaseBdev", 00:12:40.539 "uuid": "336d7793-578a-461b-94fe-8781c102e7e0", 00:12:40.539 "is_configured": true, 00:12:40.539 "data_offset": 0, 00:12:40.539 "data_size": 65536 00:12:40.539 }, 00:12:40.539 { 00:12:40.539 "name": "BaseBdev2", 00:12:40.539 "uuid": "7d234a2b-ef98-4b39-b894-b4f9b16ce48d", 00:12:40.539 "is_configured": true, 00:12:40.539 "data_offset": 0, 00:12:40.539 "data_size": 65536 00:12:40.539 }, 00:12:40.539 { 00:12:40.539 "name": "BaseBdev3", 00:12:40.539 "uuid": "b8e8c381-e557-4aec-9619-1c1f8f5ef496", 00:12:40.539 
"is_configured": true, 00:12:40.539 "data_offset": 0, 00:12:40.539 "data_size": 65536 00:12:40.539 }, 00:12:40.539 { 00:12:40.539 "name": "BaseBdev4", 00:12:40.539 "uuid": "5936c494-1624-4116-8349-1291317e177e", 00:12:40.539 "is_configured": true, 00:12:40.539 "data_offset": 0, 00:12:40.539 "data_size": 65536 00:12:40.539 } 00:12:40.539 ] 00:12:40.539 } 00:12:40.539 } 00:12:40.539 }' 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.539 BaseBdev2 00:12:40.539 BaseBdev3 00:12:40.539 BaseBdev4' 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.539 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.804 11:28:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.804 11:28:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.804 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.804 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.804 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.804 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.804 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.805 [2024-11-05 11:28:40.008068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.805 [2024-11-05 11:28:40.008105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.805 [2024-11-05 11:28:40.008197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.805 [2024-11-05 11:28:40.008268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.805 [2024-11-05 11:28:40.008278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71400 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71400 ']' 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71400 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71400 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:40.805 killing process with pid 71400 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71400' 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71400 00:12:40.805 [2024-11-05 11:28:40.057381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.805 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71400 00:12:41.372 [2024-11-05 11:28:40.437188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.310 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.310 00:12:42.310 real 0m11.196s 00:12:42.311 user 0m17.795s 00:12:42.311 sys 0m2.026s 00:12:42.311 11:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.311 11:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.311 ************************************ 00:12:42.311 END TEST raid_state_function_test 00:12:42.311 ************************************ 
00:12:42.571 11:28:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:42.571 11:28:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:42.571 11:28:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.571 11:28:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.571 ************************************ 00:12:42.571 START TEST raid_state_function_test_sb 00:12:42.571 ************************************ 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.571 
11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=72069 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72069' 00:12:42.571 Process raid pid: 72069 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72069 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72069 ']' 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.571 11:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.571 [2024-11-05 11:28:41.708938] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:42.571 [2024-11-05 11:28:41.709052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.830 [2024-11-05 11:28:41.885038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.830 [2024-11-05 11:28:41.997893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.089 [2024-11-05 11:28:42.204817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.089 [2024-11-05 11:28:42.204864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.347 [2024-11-05 11:28:42.541423] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.347 [2024-11-05 11:28:42.541478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.347 [2024-11-05 11:28:42.541492] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.347 [2024-11-05 11:28:42.541502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.347 [2024-11-05 11:28:42.541508] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:43.347 [2024-11-05 11:28:42.541516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.347 [2024-11-05 11:28:42.541522] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.347 [2024-11-05 11:28:42.541531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.347 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.348 
11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.348 "name": "Existed_Raid", 00:12:43.348 "uuid": "91c13e4b-65a8-4b4c-b242-bf2f495a4c71", 00:12:43.348 "strip_size_kb": 64, 00:12:43.348 "state": "configuring", 00:12:43.348 "raid_level": "concat", 00:12:43.348 "superblock": true, 00:12:43.348 "num_base_bdevs": 4, 00:12:43.348 "num_base_bdevs_discovered": 0, 00:12:43.348 "num_base_bdevs_operational": 4, 00:12:43.348 "base_bdevs_list": [ 00:12:43.348 { 00:12:43.348 "name": "BaseBdev1", 00:12:43.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.348 "is_configured": false, 00:12:43.348 "data_offset": 0, 00:12:43.348 "data_size": 0 00:12:43.348 }, 00:12:43.348 { 00:12:43.348 "name": "BaseBdev2", 00:12:43.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.348 "is_configured": false, 00:12:43.348 "data_offset": 0, 00:12:43.348 "data_size": 0 00:12:43.348 }, 00:12:43.348 { 00:12:43.348 "name": "BaseBdev3", 00:12:43.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.348 "is_configured": false, 00:12:43.348 "data_offset": 0, 00:12:43.348 "data_size": 0 00:12:43.348 }, 00:12:43.348 { 00:12:43.348 "name": "BaseBdev4", 00:12:43.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.348 "is_configured": false, 00:12:43.348 "data_offset": 0, 00:12:43.348 "data_size": 0 00:12:43.348 } 00:12:43.348 ] 00:12:43.348 }' 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.348 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 11:28:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.917 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 [2024-11-05 11:28:42.996581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.917 [2024-11-05 11:28:42.996632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:43.917 11:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 [2024-11-05 11:28:43.008585] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.917 [2024-11-05 11:28:43.008636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.917 [2024-11-05 11:28:43.008646] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.917 [2024-11-05 11:28:43.008657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.917 [2024-11-05 11:28:43.008663] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.917 [2024-11-05 11:28:43.008674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.917 [2024-11-05 11:28:43.008681] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:43.917 [2024-11-05 11:28:43.008691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 [2024-11-05 11:28:43.057751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.917 BaseBdev1 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 [ 00:12:43.917 { 00:12:43.917 "name": "BaseBdev1", 00:12:43.917 "aliases": [ 00:12:43.917 "ca19dea0-689e-4cd9-af99-24863174da92" 00:12:43.917 ], 00:12:43.917 "product_name": "Malloc disk", 00:12:43.917 "block_size": 512, 00:12:43.917 "num_blocks": 65536, 00:12:43.917 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:43.917 "assigned_rate_limits": { 00:12:43.917 "rw_ios_per_sec": 0, 00:12:43.917 "rw_mbytes_per_sec": 0, 00:12:43.917 "r_mbytes_per_sec": 0, 00:12:43.917 "w_mbytes_per_sec": 0 00:12:43.917 }, 00:12:43.917 "claimed": true, 00:12:43.917 "claim_type": "exclusive_write", 00:12:43.917 "zoned": false, 00:12:43.917 "supported_io_types": { 00:12:43.917 "read": true, 00:12:43.917 "write": true, 00:12:43.917 "unmap": true, 00:12:43.917 "flush": true, 00:12:43.917 "reset": true, 00:12:43.917 "nvme_admin": false, 00:12:43.917 "nvme_io": false, 00:12:43.917 "nvme_io_md": false, 00:12:43.917 "write_zeroes": true, 00:12:43.917 "zcopy": true, 00:12:43.917 "get_zone_info": false, 00:12:43.917 "zone_management": false, 00:12:43.917 "zone_append": false, 00:12:43.917 "compare": false, 00:12:43.917 "compare_and_write": false, 00:12:43.917 "abort": true, 00:12:43.917 "seek_hole": false, 00:12:43.917 "seek_data": false, 00:12:43.917 "copy": true, 00:12:43.917 "nvme_iov_md": false 00:12:43.917 }, 00:12:43.917 "memory_domains": [ 00:12:43.917 { 00:12:43.917 "dma_device_id": "system", 00:12:43.917 "dma_device_type": 1 00:12:43.917 }, 00:12:43.917 { 00:12:43.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.917 "dma_device_type": 2 00:12:43.917 } 
00:12:43.917 ], 00:12:43.917 "driver_specific": {} 00:12:43.917 } 00:12:43.917 ] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.917 11:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.917 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.917 "name": "Existed_Raid", 00:12:43.918 "uuid": "177e13c9-9623-432e-bdc7-f6e3fe485836", 00:12:43.918 "strip_size_kb": 64, 00:12:43.918 "state": "configuring", 00:12:43.918 "raid_level": "concat", 00:12:43.918 "superblock": true, 00:12:43.918 "num_base_bdevs": 4, 00:12:43.918 "num_base_bdevs_discovered": 1, 00:12:43.918 "num_base_bdevs_operational": 4, 00:12:43.918 "base_bdevs_list": [ 00:12:43.918 { 00:12:43.918 "name": "BaseBdev1", 00:12:43.918 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:43.918 "is_configured": true, 00:12:43.918 "data_offset": 2048, 00:12:43.918 "data_size": 63488 00:12:43.918 }, 00:12:43.918 { 00:12:43.918 "name": "BaseBdev2", 00:12:43.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.918 "is_configured": false, 00:12:43.918 "data_offset": 0, 00:12:43.918 "data_size": 0 00:12:43.918 }, 00:12:43.918 { 00:12:43.918 "name": "BaseBdev3", 00:12:43.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.918 "is_configured": false, 00:12:43.918 "data_offset": 0, 00:12:43.918 "data_size": 0 00:12:43.918 }, 00:12:43.918 { 00:12:43.918 "name": "BaseBdev4", 00:12:43.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.918 "is_configured": false, 00:12:43.918 "data_offset": 0, 00:12:43.918 "data_size": 0 00:12:43.918 } 00:12:43.918 ] 00:12:43.918 }' 00:12:43.918 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.918 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.486 11:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.486 [2024-11-05 11:28:43.556948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.486 [2024-11-05 11:28:43.557008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.486 [2024-11-05 11:28:43.564985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.486 [2024-11-05 11:28:43.566826] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.486 [2024-11-05 11:28:43.566866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.486 [2024-11-05 11:28:43.566876] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.486 [2024-11-05 11:28:43.566886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.486 [2024-11-05 11:28:43.566893] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.486 [2024-11-05 11:28:43.566902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:44.486 "name": "Existed_Raid", 00:12:44.486 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:44.486 "strip_size_kb": 64, 00:12:44.486 "state": "configuring", 00:12:44.486 "raid_level": "concat", 00:12:44.486 "superblock": true, 00:12:44.486 "num_base_bdevs": 4, 00:12:44.486 "num_base_bdevs_discovered": 1, 00:12:44.486 "num_base_bdevs_operational": 4, 00:12:44.486 "base_bdevs_list": [ 00:12:44.486 { 00:12:44.486 "name": "BaseBdev1", 00:12:44.486 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:44.486 "is_configured": true, 00:12:44.486 "data_offset": 2048, 00:12:44.486 "data_size": 63488 00:12:44.486 }, 00:12:44.486 { 00:12:44.486 "name": "BaseBdev2", 00:12:44.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.486 "is_configured": false, 00:12:44.486 "data_offset": 0, 00:12:44.486 "data_size": 0 00:12:44.486 }, 00:12:44.486 { 00:12:44.486 "name": "BaseBdev3", 00:12:44.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.486 "is_configured": false, 00:12:44.486 "data_offset": 0, 00:12:44.486 "data_size": 0 00:12:44.486 }, 00:12:44.486 { 00:12:44.486 "name": "BaseBdev4", 00:12:44.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.486 "is_configured": false, 00:12:44.486 "data_offset": 0, 00:12:44.486 "data_size": 0 00:12:44.486 } 00:12:44.486 ] 00:12:44.486 }' 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.486 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.745 11:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.745 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.745 11:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.745 [2024-11-05 11:28:44.006445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:44.745 BaseBdev2 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.745 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.004 [ 00:12:45.004 { 00:12:45.004 "name": "BaseBdev2", 00:12:45.004 "aliases": [ 00:12:45.004 "7482bbb3-3007-496b-a354-8f8902cd3a18" 00:12:45.004 ], 00:12:45.004 "product_name": "Malloc disk", 00:12:45.004 "block_size": 512, 00:12:45.004 "num_blocks": 65536, 00:12:45.004 "uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 
00:12:45.004 "assigned_rate_limits": { 00:12:45.004 "rw_ios_per_sec": 0, 00:12:45.004 "rw_mbytes_per_sec": 0, 00:12:45.004 "r_mbytes_per_sec": 0, 00:12:45.004 "w_mbytes_per_sec": 0 00:12:45.004 }, 00:12:45.004 "claimed": true, 00:12:45.004 "claim_type": "exclusive_write", 00:12:45.004 "zoned": false, 00:12:45.004 "supported_io_types": { 00:12:45.004 "read": true, 00:12:45.004 "write": true, 00:12:45.004 "unmap": true, 00:12:45.004 "flush": true, 00:12:45.004 "reset": true, 00:12:45.004 "nvme_admin": false, 00:12:45.004 "nvme_io": false, 00:12:45.004 "nvme_io_md": false, 00:12:45.004 "write_zeroes": true, 00:12:45.004 "zcopy": true, 00:12:45.004 "get_zone_info": false, 00:12:45.004 "zone_management": false, 00:12:45.004 "zone_append": false, 00:12:45.004 "compare": false, 00:12:45.004 "compare_and_write": false, 00:12:45.004 "abort": true, 00:12:45.004 "seek_hole": false, 00:12:45.004 "seek_data": false, 00:12:45.004 "copy": true, 00:12:45.004 "nvme_iov_md": false 00:12:45.004 }, 00:12:45.004 "memory_domains": [ 00:12:45.004 { 00:12:45.004 "dma_device_id": "system", 00:12:45.004 "dma_device_type": 1 00:12:45.004 }, 00:12:45.004 { 00:12:45.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.004 "dma_device_type": 2 00:12:45.004 } 00:12:45.004 ], 00:12:45.004 "driver_specific": {} 00:12:45.004 } 00:12:45.004 ] 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.004 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.005 "name": "Existed_Raid", 00:12:45.005 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:45.005 "strip_size_kb": 64, 00:12:45.005 "state": "configuring", 00:12:45.005 "raid_level": "concat", 00:12:45.005 "superblock": true, 00:12:45.005 "num_base_bdevs": 4, 00:12:45.005 "num_base_bdevs_discovered": 2, 00:12:45.005 
"num_base_bdevs_operational": 4, 00:12:45.005 "base_bdevs_list": [ 00:12:45.005 { 00:12:45.005 "name": "BaseBdev1", 00:12:45.005 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:45.005 "is_configured": true, 00:12:45.005 "data_offset": 2048, 00:12:45.005 "data_size": 63488 00:12:45.005 }, 00:12:45.005 { 00:12:45.005 "name": "BaseBdev2", 00:12:45.005 "uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 00:12:45.005 "is_configured": true, 00:12:45.005 "data_offset": 2048, 00:12:45.005 "data_size": 63488 00:12:45.005 }, 00:12:45.005 { 00:12:45.005 "name": "BaseBdev3", 00:12:45.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.005 "is_configured": false, 00:12:45.005 "data_offset": 0, 00:12:45.005 "data_size": 0 00:12:45.005 }, 00:12:45.005 { 00:12:45.005 "name": "BaseBdev4", 00:12:45.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.005 "is_configured": false, 00:12:45.005 "data_offset": 0, 00:12:45.005 "data_size": 0 00:12:45.005 } 00:12:45.005 ] 00:12:45.005 }' 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.005 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.264 [2024-11-05 11:28:44.502081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.264 BaseBdev3 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.264 [ 00:12:45.264 { 00:12:45.264 "name": "BaseBdev3", 00:12:45.264 "aliases": [ 00:12:45.264 "16f7004e-b3f4-4171-bebd-71f32607c51e" 00:12:45.264 ], 00:12:45.264 "product_name": "Malloc disk", 00:12:45.264 "block_size": 512, 00:12:45.264 "num_blocks": 65536, 00:12:45.264 "uuid": "16f7004e-b3f4-4171-bebd-71f32607c51e", 00:12:45.264 "assigned_rate_limits": { 00:12:45.264 "rw_ios_per_sec": 0, 00:12:45.264 "rw_mbytes_per_sec": 0, 00:12:45.264 "r_mbytes_per_sec": 0, 00:12:45.264 "w_mbytes_per_sec": 0 00:12:45.264 }, 00:12:45.264 "claimed": true, 00:12:45.264 "claim_type": "exclusive_write", 00:12:45.264 "zoned": false, 00:12:45.264 "supported_io_types": { 
00:12:45.264 "read": true, 00:12:45.264 "write": true, 00:12:45.264 "unmap": true, 00:12:45.264 "flush": true, 00:12:45.264 "reset": true, 00:12:45.264 "nvme_admin": false, 00:12:45.264 "nvme_io": false, 00:12:45.264 "nvme_io_md": false, 00:12:45.264 "write_zeroes": true, 00:12:45.264 "zcopy": true, 00:12:45.264 "get_zone_info": false, 00:12:45.264 "zone_management": false, 00:12:45.264 "zone_append": false, 00:12:45.264 "compare": false, 00:12:45.264 "compare_and_write": false, 00:12:45.264 "abort": true, 00:12:45.264 "seek_hole": false, 00:12:45.264 "seek_data": false, 00:12:45.264 "copy": true, 00:12:45.264 "nvme_iov_md": false 00:12:45.264 }, 00:12:45.264 "memory_domains": [ 00:12:45.264 { 00:12:45.264 "dma_device_id": "system", 00:12:45.264 "dma_device_type": 1 00:12:45.264 }, 00:12:45.264 { 00:12:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.264 "dma_device_type": 2 00:12:45.264 } 00:12:45.264 ], 00:12:45.264 "driver_specific": {} 00:12:45.264 } 00:12:45.264 ] 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.264 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.522 "name": "Existed_Raid", 00:12:45.522 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:45.522 "strip_size_kb": 64, 00:12:45.522 "state": "configuring", 00:12:45.522 "raid_level": "concat", 00:12:45.522 "superblock": true, 00:12:45.522 "num_base_bdevs": 4, 00:12:45.522 "num_base_bdevs_discovered": 3, 00:12:45.522 "num_base_bdevs_operational": 4, 00:12:45.522 "base_bdevs_list": [ 00:12:45.522 { 00:12:45.522 "name": "BaseBdev1", 00:12:45.522 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:45.522 "is_configured": true, 00:12:45.522 "data_offset": 2048, 00:12:45.522 "data_size": 63488 00:12:45.522 }, 00:12:45.522 { 00:12:45.522 "name": "BaseBdev2", 00:12:45.522 
"uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 00:12:45.522 "is_configured": true, 00:12:45.522 "data_offset": 2048, 00:12:45.522 "data_size": 63488 00:12:45.522 }, 00:12:45.522 { 00:12:45.522 "name": "BaseBdev3", 00:12:45.522 "uuid": "16f7004e-b3f4-4171-bebd-71f32607c51e", 00:12:45.522 "is_configured": true, 00:12:45.522 "data_offset": 2048, 00:12:45.522 "data_size": 63488 00:12:45.522 }, 00:12:45.522 { 00:12:45.522 "name": "BaseBdev4", 00:12:45.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.522 "is_configured": false, 00:12:45.522 "data_offset": 0, 00:12:45.522 "data_size": 0 00:12:45.522 } 00:12:45.522 ] 00:12:45.522 }' 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.522 11:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.780 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:45.780 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.780 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.038 [2024-11-05 11:28:45.062785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.038 [2024-11-05 11:28:45.063080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:46.038 [2024-11-05 11:28:45.063098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:46.038 [2024-11-05 11:28:45.063430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:46.039 [2024-11-05 11:28:45.063612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:46.039 [2024-11-05 11:28:45.063636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:46.039 BaseBdev4 00:12:46.039 [2024-11-05 11:28:45.063805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.039 [ 00:12:46.039 { 00:12:46.039 "name": "BaseBdev4", 00:12:46.039 "aliases": [ 00:12:46.039 "e49bccee-f6db-4c88-9b40-6290f81214eb" 00:12:46.039 ], 00:12:46.039 "product_name": "Malloc disk", 00:12:46.039 "block_size": 512, 
00:12:46.039 "num_blocks": 65536, 00:12:46.039 "uuid": "e49bccee-f6db-4c88-9b40-6290f81214eb", 00:12:46.039 "assigned_rate_limits": { 00:12:46.039 "rw_ios_per_sec": 0, 00:12:46.039 "rw_mbytes_per_sec": 0, 00:12:46.039 "r_mbytes_per_sec": 0, 00:12:46.039 "w_mbytes_per_sec": 0 00:12:46.039 }, 00:12:46.039 "claimed": true, 00:12:46.039 "claim_type": "exclusive_write", 00:12:46.039 "zoned": false, 00:12:46.039 "supported_io_types": { 00:12:46.039 "read": true, 00:12:46.039 "write": true, 00:12:46.039 "unmap": true, 00:12:46.039 "flush": true, 00:12:46.039 "reset": true, 00:12:46.039 "nvme_admin": false, 00:12:46.039 "nvme_io": false, 00:12:46.039 "nvme_io_md": false, 00:12:46.039 "write_zeroes": true, 00:12:46.039 "zcopy": true, 00:12:46.039 "get_zone_info": false, 00:12:46.039 "zone_management": false, 00:12:46.039 "zone_append": false, 00:12:46.039 "compare": false, 00:12:46.039 "compare_and_write": false, 00:12:46.039 "abort": true, 00:12:46.039 "seek_hole": false, 00:12:46.039 "seek_data": false, 00:12:46.039 "copy": true, 00:12:46.039 "nvme_iov_md": false 00:12:46.039 }, 00:12:46.039 "memory_domains": [ 00:12:46.039 { 00:12:46.039 "dma_device_id": "system", 00:12:46.039 "dma_device_type": 1 00:12:46.039 }, 00:12:46.039 { 00:12:46.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.039 "dma_device_type": 2 00:12:46.039 } 00:12:46.039 ], 00:12:46.039 "driver_specific": {} 00:12:46.039 } 00:12:46.039 ] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.039 "name": "Existed_Raid", 00:12:46.039 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:46.039 "strip_size_kb": 64, 00:12:46.039 "state": "online", 00:12:46.039 "raid_level": "concat", 00:12:46.039 "superblock": true, 00:12:46.039 "num_base_bdevs": 
4, 00:12:46.039 "num_base_bdevs_discovered": 4, 00:12:46.039 "num_base_bdevs_operational": 4, 00:12:46.039 "base_bdevs_list": [ 00:12:46.039 { 00:12:46.039 "name": "BaseBdev1", 00:12:46.039 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:46.039 "is_configured": true, 00:12:46.039 "data_offset": 2048, 00:12:46.039 "data_size": 63488 00:12:46.039 }, 00:12:46.039 { 00:12:46.039 "name": "BaseBdev2", 00:12:46.039 "uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 00:12:46.039 "is_configured": true, 00:12:46.039 "data_offset": 2048, 00:12:46.039 "data_size": 63488 00:12:46.039 }, 00:12:46.039 { 00:12:46.039 "name": "BaseBdev3", 00:12:46.039 "uuid": "16f7004e-b3f4-4171-bebd-71f32607c51e", 00:12:46.039 "is_configured": true, 00:12:46.039 "data_offset": 2048, 00:12:46.039 "data_size": 63488 00:12:46.039 }, 00:12:46.039 { 00:12:46.039 "name": "BaseBdev4", 00:12:46.039 "uuid": "e49bccee-f6db-4c88-9b40-6290f81214eb", 00:12:46.039 "is_configured": true, 00:12:46.039 "data_offset": 2048, 00:12:46.039 "data_size": 63488 00:12:46.039 } 00:12:46.039 ] 00:12:46.039 }' 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.039 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.298 
11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.298 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.557 [2024-11-05 11:28:45.578317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.557 "name": "Existed_Raid", 00:12:46.557 "aliases": [ 00:12:46.557 "9873c070-e421-4e1c-991f-1783b88c374b" 00:12:46.557 ], 00:12:46.557 "product_name": "Raid Volume", 00:12:46.557 "block_size": 512, 00:12:46.557 "num_blocks": 253952, 00:12:46.557 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:46.557 "assigned_rate_limits": { 00:12:46.557 "rw_ios_per_sec": 0, 00:12:46.557 "rw_mbytes_per_sec": 0, 00:12:46.557 "r_mbytes_per_sec": 0, 00:12:46.557 "w_mbytes_per_sec": 0 00:12:46.557 }, 00:12:46.557 "claimed": false, 00:12:46.557 "zoned": false, 00:12:46.557 "supported_io_types": { 00:12:46.557 "read": true, 00:12:46.557 "write": true, 00:12:46.557 "unmap": true, 00:12:46.557 "flush": true, 00:12:46.557 "reset": true, 00:12:46.557 "nvme_admin": false, 00:12:46.557 "nvme_io": false, 00:12:46.557 "nvme_io_md": false, 00:12:46.557 "write_zeroes": true, 00:12:46.557 "zcopy": false, 00:12:46.557 "get_zone_info": false, 00:12:46.557 "zone_management": false, 00:12:46.557 "zone_append": false, 00:12:46.557 "compare": false, 00:12:46.557 "compare_and_write": false, 00:12:46.557 "abort": false, 00:12:46.557 "seek_hole": false, 00:12:46.557 "seek_data": false, 00:12:46.557 "copy": false, 00:12:46.557 
"nvme_iov_md": false 00:12:46.557 }, 00:12:46.557 "memory_domains": [ 00:12:46.557 { 00:12:46.557 "dma_device_id": "system", 00:12:46.557 "dma_device_type": 1 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.557 "dma_device_type": 2 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "system", 00:12:46.557 "dma_device_type": 1 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.557 "dma_device_type": 2 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "system", 00:12:46.557 "dma_device_type": 1 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.557 "dma_device_type": 2 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "system", 00:12:46.557 "dma_device_type": 1 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.557 "dma_device_type": 2 00:12:46.557 } 00:12:46.557 ], 00:12:46.557 "driver_specific": { 00:12:46.557 "raid": { 00:12:46.557 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:46.557 "strip_size_kb": 64, 00:12:46.557 "state": "online", 00:12:46.557 "raid_level": "concat", 00:12:46.557 "superblock": true, 00:12:46.557 "num_base_bdevs": 4, 00:12:46.557 "num_base_bdevs_discovered": 4, 00:12:46.557 "num_base_bdevs_operational": 4, 00:12:46.557 "base_bdevs_list": [ 00:12:46.557 { 00:12:46.557 "name": "BaseBdev1", 00:12:46.557 "uuid": "ca19dea0-689e-4cd9-af99-24863174da92", 00:12:46.557 "is_configured": true, 00:12:46.557 "data_offset": 2048, 00:12:46.557 "data_size": 63488 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "name": "BaseBdev2", 00:12:46.557 "uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 00:12:46.557 "is_configured": true, 00:12:46.557 "data_offset": 2048, 00:12:46.557 "data_size": 63488 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "name": "BaseBdev3", 00:12:46.557 "uuid": "16f7004e-b3f4-4171-bebd-71f32607c51e", 00:12:46.557 "is_configured": true, 
00:12:46.557 "data_offset": 2048, 00:12:46.557 "data_size": 63488 00:12:46.557 }, 00:12:46.557 { 00:12:46.557 "name": "BaseBdev4", 00:12:46.557 "uuid": "e49bccee-f6db-4c88-9b40-6290f81214eb", 00:12:46.557 "is_configured": true, 00:12:46.557 "data_offset": 2048, 00:12:46.557 "data_size": 63488 00:12:46.557 } 00:12:46.557 ] 00:12:46.557 } 00:12:46.557 } 00:12:46.557 }' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:46.557 BaseBdev2 00:12:46.557 BaseBdev3 00:12:46.557 BaseBdev4' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.557 11:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.557 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.817 [2024-11-05 11:28:45.901539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.817 [2024-11-05 11:28:45.901582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.817 [2024-11-05 11:28:45.901634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.817 11:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.817 "name": "Existed_Raid", 00:12:46.817 "uuid": "9873c070-e421-4e1c-991f-1783b88c374b", 00:12:46.817 "strip_size_kb": 64, 00:12:46.817 "state": "offline", 00:12:46.817 "raid_level": "concat", 00:12:46.817 "superblock": true, 00:12:46.817 "num_base_bdevs": 4, 00:12:46.817 "num_base_bdevs_discovered": 3, 00:12:46.817 "num_base_bdevs_operational": 3, 00:12:46.817 "base_bdevs_list": [ 00:12:46.817 { 00:12:46.817 "name": null, 00:12:46.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.817 "is_configured": false, 00:12:46.817 "data_offset": 0, 00:12:46.817 "data_size": 63488 00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev2", 00:12:46.817 "uuid": "7482bbb3-3007-496b-a354-8f8902cd3a18", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 2048, 00:12:46.817 "data_size": 63488 00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev3", 00:12:46.817 "uuid": "16f7004e-b3f4-4171-bebd-71f32607c51e", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 2048, 00:12:46.817 "data_size": 63488 00:12:46.817 }, 00:12:46.817 { 00:12:46.817 "name": "BaseBdev4", 00:12:46.817 "uuid": "e49bccee-f6db-4c88-9b40-6290f81214eb", 00:12:46.817 "is_configured": true, 00:12:46.817 "data_offset": 2048, 00:12:46.817 "data_size": 63488 00:12:46.817 } 00:12:46.817 ] 00:12:46.817 }' 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.817 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.384 
11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.384 [2024-11-05 11:28:46.453305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.384 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.385 [2024-11-05 11:28:46.604488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:47.643 11:28:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.643 [2024-11-05 11:28:46.742378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:47.643 [2024-11-05 11:28:46.742432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.643 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 BaseBdev2 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 [ 00:12:47.904 { 00:12:47.904 "name": "BaseBdev2", 00:12:47.904 "aliases": [ 00:12:47.904 
"99971e0d-9922-48f6-bc01-908060724a8a" 00:12:47.904 ], 00:12:47.904 "product_name": "Malloc disk", 00:12:47.904 "block_size": 512, 00:12:47.904 "num_blocks": 65536, 00:12:47.904 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:47.904 "assigned_rate_limits": { 00:12:47.904 "rw_ios_per_sec": 0, 00:12:47.904 "rw_mbytes_per_sec": 0, 00:12:47.904 "r_mbytes_per_sec": 0, 00:12:47.904 "w_mbytes_per_sec": 0 00:12:47.904 }, 00:12:47.904 "claimed": false, 00:12:47.904 "zoned": false, 00:12:47.904 "supported_io_types": { 00:12:47.904 "read": true, 00:12:47.904 "write": true, 00:12:47.904 "unmap": true, 00:12:47.904 "flush": true, 00:12:47.904 "reset": true, 00:12:47.904 "nvme_admin": false, 00:12:47.904 "nvme_io": false, 00:12:47.904 "nvme_io_md": false, 00:12:47.904 "write_zeroes": true, 00:12:47.904 "zcopy": true, 00:12:47.904 "get_zone_info": false, 00:12:47.904 "zone_management": false, 00:12:47.904 "zone_append": false, 00:12:47.904 "compare": false, 00:12:47.904 "compare_and_write": false, 00:12:47.904 "abort": true, 00:12:47.904 "seek_hole": false, 00:12:47.904 "seek_data": false, 00:12:47.904 "copy": true, 00:12:47.904 "nvme_iov_md": false 00:12:47.904 }, 00:12:47.904 "memory_domains": [ 00:12:47.904 { 00:12:47.904 "dma_device_id": "system", 00:12:47.904 "dma_device_type": 1 00:12:47.904 }, 00:12:47.904 { 00:12:47.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.904 "dma_device_type": 2 00:12:47.904 } 00:12:47.904 ], 00:12:47.904 "driver_specific": {} 00:12:47.904 } 00:12:47.904 ] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.904 11:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 BaseBdev3 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 [ 00:12:47.904 { 
00:12:47.904 "name": "BaseBdev3", 00:12:47.904 "aliases": [ 00:12:47.904 "57224bcc-7427-40bd-acfe-c857159c6015" 00:12:47.904 ], 00:12:47.904 "product_name": "Malloc disk", 00:12:47.904 "block_size": 512, 00:12:47.904 "num_blocks": 65536, 00:12:47.904 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:47.904 "assigned_rate_limits": { 00:12:47.904 "rw_ios_per_sec": 0, 00:12:47.904 "rw_mbytes_per_sec": 0, 00:12:47.904 "r_mbytes_per_sec": 0, 00:12:47.904 "w_mbytes_per_sec": 0 00:12:47.904 }, 00:12:47.904 "claimed": false, 00:12:47.904 "zoned": false, 00:12:47.904 "supported_io_types": { 00:12:47.904 "read": true, 00:12:47.904 "write": true, 00:12:47.904 "unmap": true, 00:12:47.904 "flush": true, 00:12:47.904 "reset": true, 00:12:47.904 "nvme_admin": false, 00:12:47.904 "nvme_io": false, 00:12:47.904 "nvme_io_md": false, 00:12:47.904 "write_zeroes": true, 00:12:47.904 "zcopy": true, 00:12:47.904 "get_zone_info": false, 00:12:47.904 "zone_management": false, 00:12:47.904 "zone_append": false, 00:12:47.904 "compare": false, 00:12:47.904 "compare_and_write": false, 00:12:47.904 "abort": true, 00:12:47.904 "seek_hole": false, 00:12:47.904 "seek_data": false, 00:12:47.904 "copy": true, 00:12:47.904 "nvme_iov_md": false 00:12:47.904 }, 00:12:47.904 "memory_domains": [ 00:12:47.904 { 00:12:47.904 "dma_device_id": "system", 00:12:47.904 "dma_device_type": 1 00:12:47.904 }, 00:12:47.904 { 00:12:47.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.904 "dma_device_type": 2 00:12:47.904 } 00:12:47.904 ], 00:12:47.904 "driver_specific": {} 00:12:47.904 } 00:12:47.904 ] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:47.904 11:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 BaseBdev4 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:47.904 [ 00:12:47.904 { 00:12:47.905 "name": "BaseBdev4", 00:12:47.905 "aliases": [ 00:12:47.905 "d7074667-bf2b-4b28-aad0-cc1e04a44bd6" 00:12:47.905 ], 00:12:47.905 "product_name": "Malloc disk", 00:12:47.905 "block_size": 512, 00:12:47.905 "num_blocks": 65536, 00:12:47.905 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:47.905 "assigned_rate_limits": { 00:12:47.905 "rw_ios_per_sec": 0, 00:12:47.905 "rw_mbytes_per_sec": 0, 00:12:47.905 "r_mbytes_per_sec": 0, 00:12:47.905 "w_mbytes_per_sec": 0 00:12:47.905 }, 00:12:47.905 "claimed": false, 00:12:47.905 "zoned": false, 00:12:47.905 "supported_io_types": { 00:12:47.905 "read": true, 00:12:47.905 "write": true, 00:12:47.905 "unmap": true, 00:12:47.905 "flush": true, 00:12:47.905 "reset": true, 00:12:47.905 "nvme_admin": false, 00:12:47.905 "nvme_io": false, 00:12:47.905 "nvme_io_md": false, 00:12:47.905 "write_zeroes": true, 00:12:47.905 "zcopy": true, 00:12:47.905 "get_zone_info": false, 00:12:47.905 "zone_management": false, 00:12:47.905 "zone_append": false, 00:12:47.905 "compare": false, 00:12:47.905 "compare_and_write": false, 00:12:47.905 "abort": true, 00:12:47.905 "seek_hole": false, 00:12:47.905 "seek_data": false, 00:12:47.905 "copy": true, 00:12:47.905 "nvme_iov_md": false 00:12:47.905 }, 00:12:47.905 "memory_domains": [ 00:12:47.905 { 00:12:47.905 "dma_device_id": "system", 00:12:47.905 "dma_device_type": 1 00:12:47.905 }, 00:12:47.905 { 00:12:47.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.905 "dma_device_type": 2 00:12:47.905 } 00:12:47.905 ], 00:12:47.905 "driver_specific": {} 00:12:47.905 } 00:12:47.905 ] 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.905 11:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.905 [2024-11-05 11:28:47.067587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.905 [2024-11-05 11:28:47.067634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.905 [2024-11-05 11:28:47.067655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.905 [2024-11-05 11:28:47.069623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.905 [2024-11-05 11:28:47.069678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.905 "name": "Existed_Raid", 00:12:47.905 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:47.905 "strip_size_kb": 64, 00:12:47.905 "state": "configuring", 00:12:47.905 "raid_level": "concat", 00:12:47.905 "superblock": true, 00:12:47.905 "num_base_bdevs": 4, 00:12:47.905 "num_base_bdevs_discovered": 3, 00:12:47.905 "num_base_bdevs_operational": 4, 00:12:47.905 "base_bdevs_list": [ 00:12:47.905 { 00:12:47.905 "name": "BaseBdev1", 00:12:47.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.905 "is_configured": false, 00:12:47.905 "data_offset": 0, 00:12:47.905 "data_size": 0 00:12:47.905 }, 00:12:47.905 { 00:12:47.905 "name": "BaseBdev2", 00:12:47.905 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:47.905 "is_configured": true, 00:12:47.905 "data_offset": 2048, 00:12:47.905 "data_size": 63488 
00:12:47.905 }, 00:12:47.905 { 00:12:47.905 "name": "BaseBdev3", 00:12:47.905 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:47.905 "is_configured": true, 00:12:47.905 "data_offset": 2048, 00:12:47.905 "data_size": 63488 00:12:47.905 }, 00:12:47.905 { 00:12:47.905 "name": "BaseBdev4", 00:12:47.905 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:47.905 "is_configured": true, 00:12:47.905 "data_offset": 2048, 00:12:47.905 "data_size": 63488 00:12:47.905 } 00:12:47.905 ] 00:12:47.905 }' 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.905 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.473 [2024-11-05 11:28:47.510916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.473 "name": "Existed_Raid", 00:12:48.473 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:48.473 "strip_size_kb": 64, 00:12:48.473 "state": "configuring", 00:12:48.473 "raid_level": "concat", 00:12:48.473 "superblock": true, 00:12:48.473 "num_base_bdevs": 4, 00:12:48.473 "num_base_bdevs_discovered": 2, 00:12:48.473 "num_base_bdevs_operational": 4, 00:12:48.473 "base_bdevs_list": [ 00:12:48.473 { 00:12:48.473 "name": "BaseBdev1", 00:12:48.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.473 "is_configured": false, 00:12:48.473 "data_offset": 0, 00:12:48.473 "data_size": 0 00:12:48.473 }, 00:12:48.473 { 00:12:48.473 "name": null, 00:12:48.473 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:48.473 "is_configured": false, 00:12:48.473 "data_offset": 0, 00:12:48.473 "data_size": 63488 
00:12:48.473 }, 00:12:48.473 { 00:12:48.473 "name": "BaseBdev3", 00:12:48.473 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:48.473 "is_configured": true, 00:12:48.473 "data_offset": 2048, 00:12:48.473 "data_size": 63488 00:12:48.473 }, 00:12:48.473 { 00:12:48.473 "name": "BaseBdev4", 00:12:48.473 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:48.473 "is_configured": true, 00:12:48.473 "data_offset": 2048, 00:12:48.473 "data_size": 63488 00:12:48.473 } 00:12:48.473 ] 00:12:48.473 }' 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.473 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.732 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.732 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.732 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.732 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 [2024-11-05 11:28:48.054458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.997 BaseBdev1 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.997 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 [ 00:12:48.997 { 00:12:48.997 "name": "BaseBdev1", 00:12:48.997 "aliases": [ 00:12:48.997 "7ba2792f-fe18-4493-b2d8-b147cc675bb2" 00:12:48.997 ], 00:12:48.998 "product_name": "Malloc disk", 00:12:48.998 "block_size": 512, 00:12:48.998 "num_blocks": 65536, 00:12:48.998 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:48.998 "assigned_rate_limits": { 00:12:48.998 "rw_ios_per_sec": 0, 00:12:48.998 "rw_mbytes_per_sec": 0, 
00:12:48.998 "r_mbytes_per_sec": 0, 00:12:48.998 "w_mbytes_per_sec": 0 00:12:48.998 }, 00:12:48.998 "claimed": true, 00:12:48.998 "claim_type": "exclusive_write", 00:12:48.998 "zoned": false, 00:12:48.998 "supported_io_types": { 00:12:48.998 "read": true, 00:12:48.998 "write": true, 00:12:48.998 "unmap": true, 00:12:48.998 "flush": true, 00:12:48.998 "reset": true, 00:12:48.998 "nvme_admin": false, 00:12:48.998 "nvme_io": false, 00:12:48.998 "nvme_io_md": false, 00:12:48.998 "write_zeroes": true, 00:12:48.998 "zcopy": true, 00:12:48.998 "get_zone_info": false, 00:12:48.998 "zone_management": false, 00:12:48.998 "zone_append": false, 00:12:48.998 "compare": false, 00:12:48.998 "compare_and_write": false, 00:12:48.998 "abort": true, 00:12:48.998 "seek_hole": false, 00:12:48.998 "seek_data": false, 00:12:48.998 "copy": true, 00:12:48.998 "nvme_iov_md": false 00:12:48.998 }, 00:12:48.998 "memory_domains": [ 00:12:48.998 { 00:12:48.998 "dma_device_id": "system", 00:12:48.998 "dma_device_type": 1 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.998 "dma_device_type": 2 00:12:48.998 } 00:12:48.998 ], 00:12:48.998 "driver_specific": {} 00:12:48.998 } 00:12:48.998 ] 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.998 11:28:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.998 "name": "Existed_Raid", 00:12:48.998 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:48.998 "strip_size_kb": 64, 00:12:48.998 "state": "configuring", 00:12:48.998 "raid_level": "concat", 00:12:48.998 "superblock": true, 00:12:48.998 "num_base_bdevs": 4, 00:12:48.998 "num_base_bdevs_discovered": 3, 00:12:48.998 "num_base_bdevs_operational": 4, 00:12:48.998 "base_bdevs_list": [ 00:12:48.998 { 00:12:48.998 "name": "BaseBdev1", 00:12:48.998 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 2048, 00:12:48.998 "data_size": 63488 00:12:48.998 }, 00:12:48.998 { 
00:12:48.998 "name": null, 00:12:48.998 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:48.998 "is_configured": false, 00:12:48.998 "data_offset": 0, 00:12:48.998 "data_size": 63488 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "name": "BaseBdev3", 00:12:48.998 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 2048, 00:12:48.998 "data_size": 63488 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "name": "BaseBdev4", 00:12:48.998 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 2048, 00:12:48.998 "data_size": 63488 00:12:48.998 } 00:12:48.998 ] 00:12:48.998 }' 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.998 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.267 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:49.267 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.267 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.526 [2024-11-05 11:28:48.589634] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.526 11:28:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.526 "name": "Existed_Raid", 00:12:49.526 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:49.526 "strip_size_kb": 64, 00:12:49.526 "state": "configuring", 00:12:49.526 "raid_level": "concat", 00:12:49.526 "superblock": true, 00:12:49.526 "num_base_bdevs": 4, 00:12:49.526 "num_base_bdevs_discovered": 2, 00:12:49.526 "num_base_bdevs_operational": 4, 00:12:49.526 "base_bdevs_list": [ 00:12:49.526 { 00:12:49.526 "name": "BaseBdev1", 00:12:49.526 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:49.526 "is_configured": true, 00:12:49.526 "data_offset": 2048, 00:12:49.526 "data_size": 63488 00:12:49.526 }, 00:12:49.526 { 00:12:49.526 "name": null, 00:12:49.526 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:49.526 "is_configured": false, 00:12:49.526 "data_offset": 0, 00:12:49.526 "data_size": 63488 00:12:49.526 }, 00:12:49.526 { 00:12:49.526 "name": null, 00:12:49.526 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:49.526 "is_configured": false, 00:12:49.526 "data_offset": 0, 00:12:49.526 "data_size": 63488 00:12:49.526 }, 00:12:49.526 { 00:12:49.526 "name": "BaseBdev4", 00:12:49.526 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:49.526 "is_configured": true, 00:12:49.526 "data_offset": 2048, 00:12:49.526 "data_size": 63488 00:12:49.526 } 00:12:49.526 ] 00:12:49.526 }' 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.526 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.784 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.784 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.784 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.784 
11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.042 [2024-11-05 11:28:49.088777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.042 "name": "Existed_Raid", 00:12:50.042 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:50.042 "strip_size_kb": 64, 00:12:50.042 "state": "configuring", 00:12:50.042 "raid_level": "concat", 00:12:50.042 "superblock": true, 00:12:50.042 "num_base_bdevs": 4, 00:12:50.042 "num_base_bdevs_discovered": 3, 00:12:50.042 "num_base_bdevs_operational": 4, 00:12:50.042 "base_bdevs_list": [ 00:12:50.042 { 00:12:50.042 "name": "BaseBdev1", 00:12:50.042 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:50.042 "is_configured": true, 00:12:50.042 "data_offset": 2048, 00:12:50.042 "data_size": 63488 00:12:50.042 }, 00:12:50.042 { 00:12:50.042 "name": null, 00:12:50.042 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:50.042 "is_configured": false, 00:12:50.042 "data_offset": 0, 00:12:50.042 "data_size": 63488 00:12:50.042 }, 00:12:50.042 { 00:12:50.042 "name": "BaseBdev3", 00:12:50.042 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:50.042 "is_configured": true, 00:12:50.042 "data_offset": 2048, 00:12:50.042 "data_size": 63488 00:12:50.042 }, 00:12:50.042 { 00:12:50.042 "name": "BaseBdev4", 00:12:50.042 "uuid": 
"d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:50.042 "is_configured": true, 00:12:50.042 "data_offset": 2048, 00:12:50.042 "data_size": 63488 00:12:50.042 } 00:12:50.042 ] 00:12:50.042 }' 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.042 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.300 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.300 [2024-11-05 11:28:49.544029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.559 "name": "Existed_Raid", 00:12:50.559 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:50.559 "strip_size_kb": 64, 00:12:50.559 "state": "configuring", 00:12:50.559 "raid_level": "concat", 00:12:50.559 "superblock": true, 00:12:50.559 "num_base_bdevs": 4, 00:12:50.559 "num_base_bdevs_discovered": 2, 00:12:50.559 "num_base_bdevs_operational": 4, 00:12:50.559 "base_bdevs_list": [ 00:12:50.559 { 00:12:50.559 "name": null, 00:12:50.559 
"uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:50.559 "is_configured": false, 00:12:50.559 "data_offset": 0, 00:12:50.559 "data_size": 63488 00:12:50.559 }, 00:12:50.559 { 00:12:50.559 "name": null, 00:12:50.559 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:50.559 "is_configured": false, 00:12:50.559 "data_offset": 0, 00:12:50.559 "data_size": 63488 00:12:50.559 }, 00:12:50.559 { 00:12:50.559 "name": "BaseBdev3", 00:12:50.559 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:50.559 "is_configured": true, 00:12:50.559 "data_offset": 2048, 00:12:50.559 "data_size": 63488 00:12:50.559 }, 00:12:50.559 { 00:12:50.559 "name": "BaseBdev4", 00:12:50.559 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:50.559 "is_configured": true, 00:12:50.559 "data_offset": 2048, 00:12:50.559 "data_size": 63488 00:12:50.559 } 00:12:50.559 ] 00:12:50.559 }' 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.559 11:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.818 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.818 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.818 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.818 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.077 [2024-11-05 11:28:50.130263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.077 "name": "Existed_Raid", 00:12:51.077 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:51.077 "strip_size_kb": 64, 00:12:51.077 "state": "configuring", 00:12:51.077 "raid_level": "concat", 00:12:51.077 "superblock": true, 00:12:51.077 "num_base_bdevs": 4, 00:12:51.077 "num_base_bdevs_discovered": 3, 00:12:51.077 "num_base_bdevs_operational": 4, 00:12:51.077 "base_bdevs_list": [ 00:12:51.077 { 00:12:51.077 "name": null, 00:12:51.077 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:51.077 "is_configured": false, 00:12:51.077 "data_offset": 0, 00:12:51.077 "data_size": 63488 00:12:51.077 }, 00:12:51.077 { 00:12:51.077 "name": "BaseBdev2", 00:12:51.077 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:51.077 "is_configured": true, 00:12:51.077 "data_offset": 2048, 00:12:51.077 "data_size": 63488 00:12:51.077 }, 00:12:51.077 { 00:12:51.077 "name": "BaseBdev3", 00:12:51.077 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:51.077 "is_configured": true, 00:12:51.077 "data_offset": 2048, 00:12:51.077 "data_size": 63488 00:12:51.077 }, 00:12:51.077 { 00:12:51.077 "name": "BaseBdev4", 00:12:51.077 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:51.077 "is_configured": true, 00:12:51.077 "data_offset": 2048, 00:12:51.077 "data_size": 63488 00:12:51.077 } 00:12:51.077 ] 00:12:51.077 }' 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.077 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.336 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.336 11:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.336 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.336 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7ba2792f-fe18-4493-b2d8-b147cc675bb2 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 [2024-11-05 11:28:50.732518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.597 [2024-11-05 11:28:50.732756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.597 [2024-11-05 11:28:50.732769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:51.597 [2024-11-05 11:28:50.733022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:51.597 [2024-11-05 11:28:50.733203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.597 [2024-11-05 11:28:50.733218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.597 [2024-11-05 11:28:50.733343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.597 NewBaseBdev 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.597 11:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 [ 00:12:51.597 { 00:12:51.597 "name": "NewBaseBdev", 00:12:51.597 "aliases": [ 00:12:51.597 "7ba2792f-fe18-4493-b2d8-b147cc675bb2" 00:12:51.597 ], 00:12:51.597 "product_name": "Malloc disk", 00:12:51.597 "block_size": 512, 00:12:51.597 "num_blocks": 65536, 00:12:51.597 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:51.597 "assigned_rate_limits": { 00:12:51.597 "rw_ios_per_sec": 0, 00:12:51.597 "rw_mbytes_per_sec": 0, 00:12:51.597 "r_mbytes_per_sec": 0, 00:12:51.597 "w_mbytes_per_sec": 0 00:12:51.597 }, 00:12:51.597 "claimed": true, 00:12:51.597 "claim_type": "exclusive_write", 00:12:51.597 "zoned": false, 00:12:51.597 "supported_io_types": { 00:12:51.597 "read": true, 00:12:51.597 "write": true, 00:12:51.597 "unmap": true, 00:12:51.597 "flush": true, 00:12:51.597 "reset": true, 00:12:51.597 "nvme_admin": false, 00:12:51.597 "nvme_io": false, 00:12:51.597 "nvme_io_md": false, 00:12:51.597 "write_zeroes": true, 00:12:51.597 "zcopy": true, 00:12:51.597 "get_zone_info": false, 00:12:51.597 "zone_management": false, 00:12:51.597 "zone_append": false, 00:12:51.597 "compare": false, 00:12:51.597 "compare_and_write": false, 00:12:51.597 "abort": true, 00:12:51.597 "seek_hole": false, 00:12:51.597 "seek_data": false, 00:12:51.597 "copy": true, 00:12:51.597 "nvme_iov_md": false 00:12:51.597 }, 00:12:51.597 "memory_domains": [ 00:12:51.597 { 00:12:51.597 "dma_device_id": "system", 00:12:51.597 "dma_device_type": 1 00:12:51.597 }, 00:12:51.597 { 00:12:51.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.597 "dma_device_type": 2 00:12:51.597 } 00:12:51.597 ], 00:12:51.597 "driver_specific": {} 00:12:51.597 } 00:12:51.597 ] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:51.597 11:28:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.597 "name": "Existed_Raid", 00:12:51.597 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:51.597 "strip_size_kb": 64, 00:12:51.597 
"state": "online", 00:12:51.597 "raid_level": "concat", 00:12:51.597 "superblock": true, 00:12:51.597 "num_base_bdevs": 4, 00:12:51.597 "num_base_bdevs_discovered": 4, 00:12:51.597 "num_base_bdevs_operational": 4, 00:12:51.597 "base_bdevs_list": [ 00:12:51.597 { 00:12:51.597 "name": "NewBaseBdev", 00:12:51.597 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:51.597 "is_configured": true, 00:12:51.597 "data_offset": 2048, 00:12:51.597 "data_size": 63488 00:12:51.597 }, 00:12:51.597 { 00:12:51.597 "name": "BaseBdev2", 00:12:51.597 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:51.597 "is_configured": true, 00:12:51.597 "data_offset": 2048, 00:12:51.597 "data_size": 63488 00:12:51.597 }, 00:12:51.597 { 00:12:51.597 "name": "BaseBdev3", 00:12:51.597 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:51.597 "is_configured": true, 00:12:51.597 "data_offset": 2048, 00:12:51.597 "data_size": 63488 00:12:51.597 }, 00:12:51.597 { 00:12:51.597 "name": "BaseBdev4", 00:12:51.597 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:51.597 "is_configured": true, 00:12:51.597 "data_offset": 2048, 00:12:51.597 "data_size": 63488 00:12:51.597 } 00:12:51.597 ] 00:12:51.597 }' 00:12:51.597 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.598 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.167 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.168 
11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.168 [2024-11-05 11:28:51.160304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.168 "name": "Existed_Raid", 00:12:52.168 "aliases": [ 00:12:52.168 "d5c7ca0e-8419-4791-9f64-d123db5dc07a" 00:12:52.168 ], 00:12:52.168 "product_name": "Raid Volume", 00:12:52.168 "block_size": 512, 00:12:52.168 "num_blocks": 253952, 00:12:52.168 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:52.168 "assigned_rate_limits": { 00:12:52.168 "rw_ios_per_sec": 0, 00:12:52.168 "rw_mbytes_per_sec": 0, 00:12:52.168 "r_mbytes_per_sec": 0, 00:12:52.168 "w_mbytes_per_sec": 0 00:12:52.168 }, 00:12:52.168 "claimed": false, 00:12:52.168 "zoned": false, 00:12:52.168 "supported_io_types": { 00:12:52.168 "read": true, 00:12:52.168 "write": true, 00:12:52.168 "unmap": true, 00:12:52.168 "flush": true, 00:12:52.168 "reset": true, 00:12:52.168 "nvme_admin": false, 00:12:52.168 "nvme_io": false, 00:12:52.168 "nvme_io_md": false, 00:12:52.168 "write_zeroes": true, 00:12:52.168 "zcopy": false, 00:12:52.168 "get_zone_info": false, 00:12:52.168 "zone_management": false, 00:12:52.168 "zone_append": false, 00:12:52.168 "compare": false, 00:12:52.168 "compare_and_write": false, 00:12:52.168 "abort": 
false, 00:12:52.168 "seek_hole": false, 00:12:52.168 "seek_data": false, 00:12:52.168 "copy": false, 00:12:52.168 "nvme_iov_md": false 00:12:52.168 }, 00:12:52.168 "memory_domains": [ 00:12:52.168 { 00:12:52.168 "dma_device_id": "system", 00:12:52.168 "dma_device_type": 1 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.168 "dma_device_type": 2 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "system", 00:12:52.168 "dma_device_type": 1 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.168 "dma_device_type": 2 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "system", 00:12:52.168 "dma_device_type": 1 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.168 "dma_device_type": 2 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "system", 00:12:52.168 "dma_device_type": 1 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.168 "dma_device_type": 2 00:12:52.168 } 00:12:52.168 ], 00:12:52.168 "driver_specific": { 00:12:52.168 "raid": { 00:12:52.168 "uuid": "d5c7ca0e-8419-4791-9f64-d123db5dc07a", 00:12:52.168 "strip_size_kb": 64, 00:12:52.168 "state": "online", 00:12:52.168 "raid_level": "concat", 00:12:52.168 "superblock": true, 00:12:52.168 "num_base_bdevs": 4, 00:12:52.168 "num_base_bdevs_discovered": 4, 00:12:52.168 "num_base_bdevs_operational": 4, 00:12:52.168 "base_bdevs_list": [ 00:12:52.168 { 00:12:52.168 "name": "NewBaseBdev", 00:12:52.168 "uuid": "7ba2792f-fe18-4493-b2d8-b147cc675bb2", 00:12:52.168 "is_configured": true, 00:12:52.168 "data_offset": 2048, 00:12:52.168 "data_size": 63488 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "name": "BaseBdev2", 00:12:52.168 "uuid": "99971e0d-9922-48f6-bc01-908060724a8a", 00:12:52.168 "is_configured": true, 00:12:52.168 "data_offset": 2048, 00:12:52.168 "data_size": 63488 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 
"name": "BaseBdev3", 00:12:52.168 "uuid": "57224bcc-7427-40bd-acfe-c857159c6015", 00:12:52.168 "is_configured": true, 00:12:52.168 "data_offset": 2048, 00:12:52.168 "data_size": 63488 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "name": "BaseBdev4", 00:12:52.168 "uuid": "d7074667-bf2b-4b28-aad0-cc1e04a44bd6", 00:12:52.168 "is_configured": true, 00:12:52.168 "data_offset": 2048, 00:12:52.168 "data_size": 63488 00:12:52.168 } 00:12:52.168 ] 00:12:52.168 } 00:12:52.168 } 00:12:52.168 }' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:52.168 BaseBdev2 00:12:52.168 BaseBdev3 00:12:52.168 BaseBdev4' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.168 11:28:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.168 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.428 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.428 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.429 [2024-11-05 11:28:51.447391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.429 [2024-11-05 11:28:51.447423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.429 [2024-11-05 11:28:51.447494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.429 [2024-11-05 11:28:51.447566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.429 [2024-11-05 11:28:51.447576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72069 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72069 ']' 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72069 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72069 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:52.429 killing process with pid 72069 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72069' 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72069 00:12:52.429 [2024-11-05 11:28:51.485945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.429 11:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72069 00:12:52.688 [2024-11-05 11:28:51.881833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.069 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:54.069 00:12:54.069 real 0m11.369s 00:12:54.069 user 0m18.110s 00:12:54.069 sys 0m2.012s 00:12:54.069 11:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.069 11:28:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.069 ************************************ 00:12:54.069 END TEST raid_state_function_test_sb 00:12:54.069 ************************************ 00:12:54.069 11:28:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:54.069 11:28:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:54.069 11:28:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.069 11:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.069 ************************************ 00:12:54.069 START TEST raid_superblock_test 00:12:54.069 ************************************ 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72745 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72745 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72745 ']' 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.069 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.069 [2024-11-05 11:28:53.140687] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:54.069 [2024-11-05 11:28:53.141166] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72745 ] 00:12:54.069 [2024-11-05 11:28:53.315127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.331 [2024-11-05 11:28:53.426819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.589 [2024-11-05 11:28:53.630721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.589 [2024-11-05 11:28:53.630767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:54.849 
11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.849 11:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 malloc1 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 [2024-11-05 11:28:54.016048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:54.849 [2024-11-05 11:28:54.016115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.849 [2024-11-05 11:28:54.016154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.849 [2024-11-05 11:28:54.016165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.849 [2024-11-05 11:28:54.018256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.849 [2024-11-05 11:28:54.018288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:54.849 pt1 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 malloc2 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 [2024-11-05 11:28:54.071387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.849 [2024-11-05 11:28:54.071437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.849 [2024-11-05 11:28:54.071459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.849 [2024-11-05 11:28:54.071467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.849 [2024-11-05 11:28:54.073460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.849 [2024-11-05 11:28:54.073492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.849 
pt2 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.849 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 malloc3 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 [2024-11-05 11:28:54.136412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.110 [2024-11-05 11:28:54.136465] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.110 [2024-11-05 11:28:54.136486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:55.110 [2024-11-05 11:28:54.136496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.110 [2024-11-05 11:28:54.138718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.110 [2024-11-05 11:28:54.138750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.110 pt3 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 malloc4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 [2024-11-05 11:28:54.191738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:55.110 [2024-11-05 11:28:54.191789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.110 [2024-11-05 11:28:54.191807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:55.110 [2024-11-05 11:28:54.191816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.110 [2024-11-05 11:28:54.193893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.110 [2024-11-05 11:28:54.193926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:55.110 pt4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 [2024-11-05 11:28:54.203761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.110 [2024-11-05 
11:28:54.205637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.110 [2024-11-05 11:28:54.205702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:55.110 [2024-11-05 11:28:54.205763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:55.110 [2024-11-05 11:28:54.205945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:55.110 [2024-11-05 11:28:54.205985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:55.110 [2024-11-05 11:28:54.206260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:55.110 [2024-11-05 11:28:54.206441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:55.110 [2024-11-05 11:28:54.206462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:55.110 [2024-11-05 11:28:54.206615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.110 "name": "raid_bdev1", 00:12:55.110 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d", 00:12:55.110 "strip_size_kb": 64, 00:12:55.110 "state": "online", 00:12:55.110 "raid_level": "concat", 00:12:55.110 "superblock": true, 00:12:55.110 "num_base_bdevs": 4, 00:12:55.110 "num_base_bdevs_discovered": 4, 00:12:55.110 "num_base_bdevs_operational": 4, 00:12:55.110 "base_bdevs_list": [ 00:12:55.110 { 00:12:55.110 "name": "pt1", 00:12:55.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.110 "is_configured": true, 00:12:55.110 "data_offset": 2048, 00:12:55.110 "data_size": 63488 00:12:55.110 }, 00:12:55.110 { 00:12:55.110 "name": "pt2", 00:12:55.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.110 "is_configured": true, 00:12:55.110 "data_offset": 2048, 00:12:55.110 "data_size": 63488 00:12:55.110 }, 00:12:55.110 { 00:12:55.110 "name": "pt3", 00:12:55.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.110 "is_configured": true, 00:12:55.110 "data_offset": 2048, 00:12:55.110 
"data_size": 63488 00:12:55.110 }, 00:12:55.110 { 00:12:55.110 "name": "pt4", 00:12:55.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.110 "is_configured": true, 00:12:55.110 "data_offset": 2048, 00:12:55.110 "data_size": 63488 00:12:55.110 } 00:12:55.110 ] 00:12:55.110 }' 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.110 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.370 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.370 [2024-11-05 11:28:54.631465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.635 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.635 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.635 "name": "raid_bdev1", 00:12:55.635 "aliases": [ 00:12:55.635 "4332ac50-667e-4d7c-bfd4-b85ddaa5621d" 
00:12:55.635 ], 00:12:55.635 "product_name": "Raid Volume", 00:12:55.635 "block_size": 512, 00:12:55.635 "num_blocks": 253952, 00:12:55.635 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d", 00:12:55.635 "assigned_rate_limits": { 00:12:55.635 "rw_ios_per_sec": 0, 00:12:55.635 "rw_mbytes_per_sec": 0, 00:12:55.635 "r_mbytes_per_sec": 0, 00:12:55.635 "w_mbytes_per_sec": 0 00:12:55.635 }, 00:12:55.636 "claimed": false, 00:12:55.636 "zoned": false, 00:12:55.636 "supported_io_types": { 00:12:55.636 "read": true, 00:12:55.636 "write": true, 00:12:55.636 "unmap": true, 00:12:55.636 "flush": true, 00:12:55.636 "reset": true, 00:12:55.636 "nvme_admin": false, 00:12:55.636 "nvme_io": false, 00:12:55.636 "nvme_io_md": false, 00:12:55.636 "write_zeroes": true, 00:12:55.636 "zcopy": false, 00:12:55.636 "get_zone_info": false, 00:12:55.636 "zone_management": false, 00:12:55.636 "zone_append": false, 00:12:55.636 "compare": false, 00:12:55.636 "compare_and_write": false, 00:12:55.636 "abort": false, 00:12:55.636 "seek_hole": false, 00:12:55.636 "seek_data": false, 00:12:55.636 "copy": false, 00:12:55.636 "nvme_iov_md": false 00:12:55.636 }, 00:12:55.636 "memory_domains": [ 00:12:55.636 { 00:12:55.636 "dma_device_id": "system", 00:12:55.636 "dma_device_type": 1 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.636 "dma_device_type": 2 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "system", 00:12:55.636 "dma_device_type": 1 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.636 "dma_device_type": 2 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "system", 00:12:55.636 "dma_device_type": 1 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.636 "dma_device_type": 2 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": "system", 00:12:55.636 "dma_device_type": 1 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:55.636 "dma_device_type": 2 00:12:55.636 } 00:12:55.636 ], 00:12:55.636 "driver_specific": { 00:12:55.636 "raid": { 00:12:55.636 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d", 00:12:55.636 "strip_size_kb": 64, 00:12:55.636 "state": "online", 00:12:55.636 "raid_level": "concat", 00:12:55.636 "superblock": true, 00:12:55.636 "num_base_bdevs": 4, 00:12:55.636 "num_base_bdevs_discovered": 4, 00:12:55.636 "num_base_bdevs_operational": 4, 00:12:55.636 "base_bdevs_list": [ 00:12:55.636 { 00:12:55.636 "name": "pt1", 00:12:55.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "pt2", 00:12:55.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "pt3", 00:12:55.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "pt4", 00:12:55.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 } 00:12:55.636 ] 00:12:55.636 } 00:12:55.636 } 00:12:55.636 }' 00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.636 pt2 00:12:55.636 pt3 00:12:55.636 pt4' 00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:55.636 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:55.917 [2024-11-05 11:28:54.914784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4332ac50-667e-4d7c-bfd4-b85ddaa5621d
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4332ac50-667e-4d7c-bfd4-b85ddaa5621d ']'
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 [2024-11-05 11:28:54.958427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:55.917 [2024-11-05 11:28:54.958456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:55.917 [2024-11-05 11:28:54.958532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:55.917 [2024-11-05 11:28:54.958605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:55.917 [2024-11-05 11:28:54.958620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.917 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.917 [2024-11-05 11:28:55.118189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:55.917 [2024-11-05 11:28:55.120130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:55.917 [2024-11-05 11:28:55.120211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:55.917 [2024-11-05 11:28:55.120246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:55.917 [2024-11-05 11:28:55.120312] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:55.917 [2024-11-05 11:28:55.120362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:55.917 [2024-11-05 11:28:55.120393] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:55.917 [2024-11-05 11:28:55.120412] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:55.917 [2024-11-05 11:28:55.120424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:55.917 [2024-11-05 11:28:55.120434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:55.917 request:
00:12:55.917 {
00:12:55.917 "name": "raid_bdev1",
00:12:55.917 "raid_level": "concat",
00:12:55.917 "base_bdevs": [
00:12:55.917 "malloc1",
00:12:55.917 "malloc2",
00:12:55.917 "malloc3",
00:12:55.917 "malloc4"
00:12:55.917 ],
00:12:55.917 "strip_size_kb": 64,
00:12:55.917 "superblock": false,
00:12:55.917 "method": "bdev_raid_create",
00:12:55.917 "req_id": 1
00:12:55.917 }
00:12:55.917 Got JSON-RPC error response
00:12:55.917 response:
00:12:55.917 {
00:12:55.917 "code": -17,
00:12:55.917 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:55.917 }
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:55.918 [2024-11-05 11:28:55.182035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:55.918 [2024-11-05 11:28:55.182080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:55.918 [2024-11-05 11:28:55.182095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:55.918 [2024-11-05 11:28:55.182104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:55.918 [2024-11-05 11:28:55.184218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:55.918 [2024-11-05 11:28:55.184253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:55.918 [2024-11-05 11:28:55.184337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:55.918 [2024-11-05 11:28:55.184392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:55.918 pt1
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.918 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.178 "name": "raid_bdev1",
00:12:56.178 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d",
00:12:56.178 "strip_size_kb": 64,
00:12:56.178 "state": "configuring",
00:12:56.178 "raid_level": "concat",
00:12:56.178 "superblock": true,
00:12:56.178 "num_base_bdevs": 4,
00:12:56.178 "num_base_bdevs_discovered": 1,
00:12:56.178 "num_base_bdevs_operational": 4,
00:12:56.178 "base_bdevs_list": [
00:12:56.178 {
00:12:56.178 "name": "pt1",
00:12:56.178 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:56.178 "is_configured": true,
00:12:56.178 "data_offset": 2048,
00:12:56.178 "data_size": 63488
00:12:56.178 },
00:12:56.178 {
00:12:56.178 "name": null,
00:12:56.178 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:56.178 "is_configured": false,
00:12:56.178 "data_offset": 2048,
00:12:56.178 "data_size": 63488
00:12:56.178 },
00:12:56.178 {
00:12:56.178 "name": null,
00:12:56.178 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:56.178 "is_configured": false,
00:12:56.178 "data_offset": 2048,
00:12:56.178 "data_size": 63488
00:12:56.178 },
00:12:56.178 {
00:12:56.178 "name": null,
00:12:56.178 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:56.178 "is_configured": false,
00:12:56.178 "data_offset": 2048,
00:12:56.178 "data_size": 63488
00:12:56.178 }
00:12:56.178 ]
00:12:56.178 }'
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.178 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:56.439 [2024-11-05 11:28:55.617356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:56.439 [2024-11-05 11:28:55.617467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:56.439 [2024-11-05 11:28:55.617488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:56.439 [2024-11-05 11:28:55.617499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:56.439 [2024-11-05 11:28:55.617969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:56.439 [2024-11-05 11:28:55.617990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:56.439 [2024-11-05 11:28:55.618077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:56.439 [2024-11-05 11:28:55.618101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:56.439 pt2
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:56.439 [2024-11-05 11:28:55.629381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.439 "name": "raid_bdev1",
00:12:56.439 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d",
00:12:56.439 "strip_size_kb": 64,
00:12:56.439 "state": "configuring",
00:12:56.439 "raid_level": "concat",
00:12:56.439 "superblock": true,
00:12:56.439 "num_base_bdevs": 4,
00:12:56.439 "num_base_bdevs_discovered": 1,
00:12:56.439 "num_base_bdevs_operational": 4,
00:12:56.439 "base_bdevs_list": [
00:12:56.439 {
00:12:56.439 "name": "pt1",
00:12:56.439 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:56.439 "is_configured": true,
00:12:56.439 "data_offset": 2048,
00:12:56.439 "data_size": 63488
00:12:56.439 },
00:12:56.439 {
00:12:56.439 "name": null,
00:12:56.439 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:56.439 "is_configured": false,
00:12:56.439 "data_offset": 0,
00:12:56.439 "data_size": 63488
00:12:56.439 },
00:12:56.439 {
00:12:56.439 "name": null,
00:12:56.439 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:56.439 "is_configured": false,
00:12:56.439 "data_offset": 2048,
00:12:56.439 "data_size": 63488
00:12:56.439 },
00:12:56.439 {
00:12:56.439 "name": null,
00:12:56.439 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:56.439 "is_configured": false,
00:12:56.439 "data_offset": 2048,
00:12:56.439 "data_size": 63488
00:12:56.439 }
00:12:56.439 ]
00:12:56.439 }'
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.439 11:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.008 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.009 [2024-11-05 11:28:56.092548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:57.009 [2024-11-05 11:28:56.092615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:57.009 [2024-11-05 11:28:56.092636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:57.009 [2024-11-05 11:28:56.092645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:57.009 [2024-11-05 11:28:56.093103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:57.009 [2024-11-05 11:28:56.093120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:57.009 [2024-11-05 11:28:56.093227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:57.009 [2024-11-05 11:28:56.093251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:57.009 pt2
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.009 [2024-11-05 11:28:56.104498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:57.009 [2024-11-05 11:28:56.104562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:57.009 [2024-11-05 11:28:56.104586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:57.009 [2024-11-05 11:28:56.104597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:57.009 [2024-11-05 11:28:56.104963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:57.009 [2024-11-05 11:28:56.104984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:57.009 [2024-11-05 11:28:56.105052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:57.009 [2024-11-05 11:28:56.105070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:57.009 pt3
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.009 [2024-11-05 11:28:56.116475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:57.009 [2024-11-05 11:28:56.116537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:57.009 [2024-11-05 11:28:56.116557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:57.009 [2024-11-05 11:28:56.116565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:57.009 [2024-11-05 11:28:56.116962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:57.009 [2024-11-05 11:28:56.116979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:57.009 [2024-11-05 11:28:56.117047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:57.009 [2024-11-05 11:28:56.117066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:57.009 [2024-11-05 11:28:56.117218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:57.009 [2024-11-05 11:28:56.117227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:57.009 [2024-11-05 11:28:56.117465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:57.009 [2024-11-05 11:28:56.117617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:57.009 [2024-11-05 11:28:56.117640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:57.009 [2024-11-05 11:28:56.117778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:57.009 pt4
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:57.009 "name": "raid_bdev1",
00:12:57.009 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d",
00:12:57.009 "strip_size_kb": 64,
00:12:57.009 "state": "online",
00:12:57.009 "raid_level": "concat",
00:12:57.009 "superblock": true,
00:12:57.009 "num_base_bdevs": 4,
00:12:57.009 "num_base_bdevs_discovered": 4,
00:12:57.009 "num_base_bdevs_operational": 4,
00:12:57.009 "base_bdevs_list": [
00:12:57.009 {
00:12:57.009 "name": "pt1",
00:12:57.009 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:57.009 "is_configured": true,
00:12:57.009 "data_offset": 2048,
00:12:57.009 "data_size": 63488
00:12:57.009 },
00:12:57.009 {
00:12:57.009 "name": "pt2",
00:12:57.009 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:57.009 "is_configured": true,
00:12:57.009 "data_offset": 2048,
00:12:57.009 "data_size": 63488
00:12:57.009 },
00:12:57.009 {
00:12:57.009 "name": "pt3",
00:12:57.009 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:57.009 "is_configured": true,
00:12:57.009 "data_offset": 2048,
00:12:57.009 "data_size": 63488
00:12:57.009 },
00:12:57.009 {
00:12:57.009 "name": "pt4",
00:12:57.009 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:57.009 "is_configured": true,
00:12:57.009 "data_offset": 2048,
00:12:57.009 "data_size": 63488
00:12:57.009 }
00:12:57.009 ]
00:12:57.009 }'
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:57.009 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.578 [2024-11-05 11:28:56.588036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.578 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:57.578 "name": "raid_bdev1",
00:12:57.578 "aliases": [
00:12:57.578 "4332ac50-667e-4d7c-bfd4-b85ddaa5621d"
00:12:57.578 ],
00:12:57.578 "product_name": "Raid Volume",
00:12:57.578 "block_size": 512,
00:12:57.578 "num_blocks": 253952,
00:12:57.578 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d",
00:12:57.578 "assigned_rate_limits": {
00:12:57.578 "rw_ios_per_sec": 0,
00:12:57.578 "rw_mbytes_per_sec": 0,
00:12:57.578 "r_mbytes_per_sec": 0,
00:12:57.578 "w_mbytes_per_sec": 0
00:12:57.578 },
00:12:57.578 "claimed": false,
00:12:57.578 "zoned": false,
00:12:57.578 "supported_io_types": {
00:12:57.578 "read": true,
00:12:57.578 "write": true,
00:12:57.578 "unmap": true,
00:12:57.578 "flush": true,
00:12:57.578 "reset": true,
00:12:57.578 "nvme_admin": false,
00:12:57.578 "nvme_io": false,
00:12:57.578 "nvme_io_md": false,
00:12:57.578 "write_zeroes": true,
00:12:57.578 "zcopy": false,
00:12:57.578 "get_zone_info": false,
00:12:57.578 "zone_management": false,
00:12:57.578 "zone_append": false,
00:12:57.578 "compare": false,
00:12:57.578 "compare_and_write": false,
00:12:57.578 "abort": false,
00:12:57.578 "seek_hole": false,
00:12:57.578 "seek_data": false,
00:12:57.578 "copy": false,
00:12:57.578 "nvme_iov_md": false
00:12:57.578 },
00:12:57.578 "memory_domains": [
00:12:57.578 {
00:12:57.578 "dma_device_id": "system",
00:12:57.578 "dma_device_type": 1
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:57.578 "dma_device_type": 2
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "system",
00:12:57.578 "dma_device_type": 1
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:57.578 "dma_device_type": 2
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "system",
00:12:57.578 "dma_device_type": 1
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:57.578 "dma_device_type": 2
00:12:57.578 },
00:12:57.578 {
00:12:57.578 "dma_device_id": "system",
00:12:57.579 "dma_device_type": 1
00:12:57.579 },
00:12:57.579 {
00:12:57.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:57.579 "dma_device_type": 2
00:12:57.579 }
00:12:57.579 ],
00:12:57.579 "driver_specific": {
00:12:57.579 "raid": {
00:12:57.579 "uuid": "4332ac50-667e-4d7c-bfd4-b85ddaa5621d",
00:12:57.579 "strip_size_kb": 64,
00:12:57.579 "state": "online",
00:12:57.579 "raid_level": "concat",
00:12:57.579 "superblock": true,
00:12:57.579 "num_base_bdevs": 4,
00:12:57.579 "num_base_bdevs_discovered": 4,
00:12:57.579 "num_base_bdevs_operational": 4,
00:12:57.579 "base_bdevs_list": [
00:12:57.579 {
00:12:57.579 "name": "pt1",
00:12:57.579 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:57.579 "is_configured": true,
00:12:57.579 "data_offset": 2048,
00:12:57.579 "data_size": 63488
00:12:57.579 },
00:12:57.579 {
00:12:57.579 "name": "pt2",
00:12:57.579 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:57.579 "is_configured": true,
00:12:57.579 "data_offset": 2048,
00:12:57.579 "data_size": 63488
00:12:57.579 },
00:12:57.579 {
00:12:57.579 "name": "pt3",
00:12:57.579 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:57.579 "is_configured": true,
00:12:57.579 "data_offset": 2048,
00:12:57.579 "data_size": 63488
00:12:57.579 },
00:12:57.579 {
00:12:57.579 "name": "pt4",
00:12:57.579 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:57.579 "is_configured": true,
00:12:57.579 "data_offset": 2048,
00:12:57.579 "data_size": 63488
00:12:57.579 }
00:12:57.579 ]
00:12:57.579 }
00:12:57.579 }
00:12:57.579 }'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:57.579 pt2
00:12:57.579 pt3
00:12:57.579 pt4'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.579 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 [2024-11-05 11:28:56.895521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4332ac50-667e-4d7c-bfd4-b85ddaa5621d '!=' 4332ac50-667e-4d7c-bfd4-b85ddaa5621d ']' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72745 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72745 ']' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72745 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72745 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.838 killing process with pid 72745 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72745' 00:12:57.838 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72745 00:12:57.838 [2024-11-05 11:28:56.977743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.838 [2024-11-05 11:28:56.977843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.839 [2024-11-05 11:28:56.977916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.839 11:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72745 00:12:57.839 [2024-11-05 11:28:56.977926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:58.097 [2024-11-05 11:28:57.371234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.477 11:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:59.477 00:12:59.477 real 0m5.409s 00:12:59.477 user 0m7.758s 00:12:59.477 sys 0m0.959s 00:12:59.477 11:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:59.477 11:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.477 ************************************ 00:12:59.477 END TEST raid_superblock_test 
00:12:59.477 ************************************ 00:12:59.477 11:28:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:59.477 11:28:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:59.477 11:28:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:59.477 11:28:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.477 ************************************ 00:12:59.477 START TEST raid_read_error_test 00:12:59.477 ************************************ 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8KLTzLSroz 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73005 00:12:59.477 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73005 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73005 ']' 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:59.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:59.478 11:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.478 [2024-11-05 11:28:58.633370] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:12:59.478 [2024-11-05 11:28:58.633496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73005 ] 00:12:59.738 [2024-11-05 11:28:58.805415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.738 [2024-11-05 11:28:58.918324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.997 [2024-11-05 11:28:59.108918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.997 [2024-11-05 11:28:59.108971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.257 BaseBdev1_malloc 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.257 true 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.257 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.257 [2024-11-05 11:28:59.518103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.257 [2024-11-05 11:28:59.518171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.257 [2024-11-05 11:28:59.518194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:00.257 [2024-11-05 11:28:59.518205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.258 [2024-11-05 11:28:59.520473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.258 [2024-11-05 11:28:59.520512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.258 BaseBdev1 00:13:00.258 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.258 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.258 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.258 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.258 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.517 BaseBdev2_malloc 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.517 true 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.517 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.517 [2024-11-05 11:28:59.585853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:00.517 [2024-11-05 11:28:59.585928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.517 [2024-11-05 11:28:59.585953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:00.517 [2024-11-05 11:28:59.585965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.517 [2024-11-05 11:28:59.588504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.517 [2024-11-05 11:28:59.588595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.517 BaseBdev2 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 BaseBdev3_malloc 00:13:00.518 11:28:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 true 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 [2024-11-05 11:28:59.663692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:00.518 [2024-11-05 11:28:59.663748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.518 [2024-11-05 11:28:59.663764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:00.518 [2024-11-05 11:28:59.663775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.518 [2024-11-05 11:28:59.665991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.518 [2024-11-05 11:28:59.666031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:00.518 BaseBdev3 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 BaseBdev4_malloc 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 true 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 [2024-11-05 11:28:59.729581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:00.518 [2024-11-05 11:28:59.729653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.518 [2024-11-05 11:28:59.729676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:00.518 [2024-11-05 11:28:59.729687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.518 [2024-11-05 11:28:59.732015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.518 [2024-11-05 11:28:59.732066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:00.518 BaseBdev4 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 [2024-11-05 11:28:59.741620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.518 [2024-11-05 11:28:59.743537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.518 [2024-11-05 11:28:59.743623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.518 [2024-11-05 11:28:59.743702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.518 [2024-11-05 11:28:59.743963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:00.518 [2024-11-05 11:28:59.743978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:00.518 [2024-11-05 11:28:59.744290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:00.518 [2024-11-05 11:28:59.744469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:00.518 [2024-11-05 11:28:59.744485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:00.518 [2024-11-05 11:28:59.744678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:00.518 11:28:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.518 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.777 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.777 "name": "raid_bdev1", 00:13:00.777 "uuid": "bb24c250-ba5e-4357-bb04-8f5b1cbf2551", 00:13:00.777 "strip_size_kb": 64, 00:13:00.777 "state": "online", 00:13:00.777 "raid_level": "concat", 00:13:00.777 "superblock": true, 00:13:00.777 "num_base_bdevs": 4, 00:13:00.777 "num_base_bdevs_discovered": 4, 00:13:00.777 "num_base_bdevs_operational": 4, 00:13:00.777 "base_bdevs_list": [ 
00:13:00.777 { 00:13:00.777 "name": "BaseBdev1", 00:13:00.778 "uuid": "399817c1-5549-56d9-9be3-ffce58ab6ad3", 00:13:00.778 "is_configured": true, 00:13:00.778 "data_offset": 2048, 00:13:00.778 "data_size": 63488 00:13:00.778 }, 00:13:00.778 { 00:13:00.778 "name": "BaseBdev2", 00:13:00.778 "uuid": "a3b52bd1-cc52-55e3-9046-94982a895e2b", 00:13:00.778 "is_configured": true, 00:13:00.778 "data_offset": 2048, 00:13:00.778 "data_size": 63488 00:13:00.778 }, 00:13:00.778 { 00:13:00.778 "name": "BaseBdev3", 00:13:00.778 "uuid": "55b28ea6-44d5-54a9-8c00-ac559db2a96c", 00:13:00.778 "is_configured": true, 00:13:00.778 "data_offset": 2048, 00:13:00.778 "data_size": 63488 00:13:00.778 }, 00:13:00.778 { 00:13:00.778 "name": "BaseBdev4", 00:13:00.778 "uuid": "1b1f6e92-ffc9-5f7f-878f-d1a3d79ccd33", 00:13:00.778 "is_configured": true, 00:13:00.778 "data_offset": 2048, 00:13:00.778 "data_size": 63488 00:13:00.778 } 00:13:00.778 ] 00:13:00.778 }' 00:13:00.778 11:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.778 11:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.036 11:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:01.036 11:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.036 [2024-11-05 11:29:00.245949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.982 11:29:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.982 11:29:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.982 "name": "raid_bdev1", 00:13:01.982 "uuid": "bb24c250-ba5e-4357-bb04-8f5b1cbf2551", 00:13:01.982 "strip_size_kb": 64, 00:13:01.982 "state": "online", 00:13:01.982 "raid_level": "concat", 00:13:01.982 "superblock": true, 00:13:01.982 "num_base_bdevs": 4, 00:13:01.982 "num_base_bdevs_discovered": 4, 00:13:01.982 "num_base_bdevs_operational": 4, 00:13:01.982 "base_bdevs_list": [ 00:13:01.982 { 00:13:01.982 "name": "BaseBdev1", 00:13:01.983 "uuid": "399817c1-5549-56d9-9be3-ffce58ab6ad3", 00:13:01.983 "is_configured": true, 00:13:01.983 "data_offset": 2048, 00:13:01.983 "data_size": 63488 00:13:01.983 }, 00:13:01.983 { 00:13:01.983 "name": "BaseBdev2", 00:13:01.983 "uuid": "a3b52bd1-cc52-55e3-9046-94982a895e2b", 00:13:01.983 "is_configured": true, 00:13:01.983 "data_offset": 2048, 00:13:01.983 "data_size": 63488 00:13:01.983 }, 00:13:01.983 { 00:13:01.983 "name": "BaseBdev3", 00:13:01.983 "uuid": "55b28ea6-44d5-54a9-8c00-ac559db2a96c", 00:13:01.983 "is_configured": true, 00:13:01.983 "data_offset": 2048, 00:13:01.983 "data_size": 63488 00:13:01.983 }, 00:13:01.983 { 00:13:01.983 "name": "BaseBdev4", 00:13:01.983 "uuid": "1b1f6e92-ffc9-5f7f-878f-d1a3d79ccd33", 00:13:01.983 "is_configured": true, 00:13:01.983 "data_offset": 2048, 00:13:01.983 "data_size": 63488 00:13:01.983 } 00:13:01.983 ] 00:13:01.983 }' 00:13:01.983 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.983 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.549 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.549 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.549 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.550 [2024-11-05 11:29:01.630058] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.550 [2024-11-05 11:29:01.630180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.550 [2024-11-05 11:29:01.632861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.550 [2024-11-05 11:29:01.632964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.550 [2024-11-05 11:29:01.633013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.550 [2024-11-05 11:29:01.633028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:02.550 { 00:13:02.550 "results": [ 00:13:02.550 { 00:13:02.550 "job": "raid_bdev1", 00:13:02.550 "core_mask": "0x1", 00:13:02.550 "workload": "randrw", 00:13:02.550 "percentage": 50, 00:13:02.550 "status": "finished", 00:13:02.550 "queue_depth": 1, 00:13:02.550 "io_size": 131072, 00:13:02.550 "runtime": 1.384979, 00:13:02.550 "iops": 15925.151211679022, 00:13:02.550 "mibps": 1990.6439014598777, 00:13:02.550 "io_failed": 1, 00:13:02.550 "io_timeout": 0, 00:13:02.550 "avg_latency_us": 87.44164006990226, 00:13:02.550 "min_latency_us": 24.258515283842794, 00:13:02.550 "max_latency_us": 1452.380786026201 00:13:02.550 } 00:13:02.550 ], 00:13:02.550 "core_count": 1 00:13:02.550 } 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73005 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73005 ']' 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73005 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73005 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73005' 00:13:02.550 killing process with pid 73005 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73005 00:13:02.550 [2024-11-05 11:29:01.663213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.550 11:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73005 00:13:02.809 [2024-11-05 11:29:01.991199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8KLTzLSroz 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:04.188 00:13:04.188 real 0m4.613s 00:13:04.188 user 0m5.420s 00:13:04.188 sys 0m0.582s 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:04.188 11:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.188 ************************************ 00:13:04.188 END TEST raid_read_error_test 00:13:04.188 ************************************ 00:13:04.188 11:29:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:04.188 11:29:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:04.188 11:29:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.188 11:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.188 ************************************ 00:13:04.188 START TEST raid_write_error_test 00:13:04.188 ************************************ 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z5UqJTVoAs 00:13:04.188 11:29:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73145 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73145 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73145 ']' 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.188 11:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.188 [2024-11-05 11:29:03.320764] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:04.188 [2024-11-05 11:29:03.320938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73145 ] 00:13:04.447 [2024-11-05 11:29:03.492410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.447 [2024-11-05 11:29:03.604694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.707 [2024-11-05 11:29:03.801715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.707 [2024-11-05 11:29:03.801883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.966 BaseBdev1_malloc 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.966 true 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.966 [2024-11-05 11:29:04.207396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.966 [2024-11-05 11:29:04.207523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.966 [2024-11-05 11:29:04.207550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.966 [2024-11-05 11:29:04.207562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.966 [2024-11-05 11:29:04.209751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.966 [2024-11-05 11:29:04.209830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.966 BaseBdev1 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.966 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 BaseBdev2_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:05.226 11:29:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 true 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 [2024-11-05 11:29:04.273643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.226 [2024-11-05 11:29:04.273696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.226 [2024-11-05 11:29:04.273728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.226 [2024-11-05 11:29:04.273738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.226 [2024-11-05 11:29:04.275805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.226 [2024-11-05 11:29:04.275845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.226 BaseBdev2 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:05.226 BaseBdev3_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 true 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 [2024-11-05 11:29:04.354021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:05.226 [2024-11-05 11:29:04.354089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.226 [2024-11-05 11:29:04.354111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:05.226 [2024-11-05 11:29:04.354122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.226 [2024-11-05 11:29:04.356393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.226 [2024-11-05 11:29:04.356511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:05.226 BaseBdev3 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 BaseBdev4_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 true 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 [2024-11-05 11:29:04.420801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:05.226 [2024-11-05 11:29:04.420858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.226 [2024-11-05 11:29:04.420879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:05.226 [2024-11-05 11:29:04.420889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.226 [2024-11-05 11:29:04.423078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.226 [2024-11-05 11:29:04.423118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:05.226 BaseBdev4 
00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.226 [2024-11-05 11:29:04.432824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.226 [2024-11-05 11:29:04.434570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.226 [2024-11-05 11:29:04.434642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.226 [2024-11-05 11:29:04.434708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:05.226 [2024-11-05 11:29:04.434921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:05.226 [2024-11-05 11:29:04.434935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:05.226 [2024-11-05 11:29:04.435197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:05.226 [2024-11-05 11:29:04.435371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:05.226 [2024-11-05 11:29:04.435382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:05.226 [2024-11-05 11:29:04.435541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.226 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.227 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.227 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.227 "name": "raid_bdev1", 00:13:05.227 "uuid": "143ca8c7-8078-4e8b-990a-dae1bb524332", 00:13:05.227 "strip_size_kb": 64, 00:13:05.227 "state": "online", 00:13:05.227 "raid_level": "concat", 00:13:05.227 "superblock": true, 00:13:05.227 "num_base_bdevs": 4, 00:13:05.227 "num_base_bdevs_discovered": 4, 00:13:05.227 
"num_base_bdevs_operational": 4, 00:13:05.227 "base_bdevs_list": [ 00:13:05.227 { 00:13:05.227 "name": "BaseBdev1", 00:13:05.227 "uuid": "cd96813c-3f9c-5883-9211-35701c0f60fe", 00:13:05.227 "is_configured": true, 00:13:05.227 "data_offset": 2048, 00:13:05.227 "data_size": 63488 00:13:05.227 }, 00:13:05.227 { 00:13:05.227 "name": "BaseBdev2", 00:13:05.227 "uuid": "6bd5259b-1d6e-58f6-95d5-28b26e87cd99", 00:13:05.227 "is_configured": true, 00:13:05.227 "data_offset": 2048, 00:13:05.227 "data_size": 63488 00:13:05.227 }, 00:13:05.227 { 00:13:05.227 "name": "BaseBdev3", 00:13:05.227 "uuid": "463383c3-36ff-5547-81e7-8dbbb4bf4f92", 00:13:05.227 "is_configured": true, 00:13:05.227 "data_offset": 2048, 00:13:05.227 "data_size": 63488 00:13:05.227 }, 00:13:05.227 { 00:13:05.227 "name": "BaseBdev4", 00:13:05.227 "uuid": "2885f606-9b21-5469-8ac1-680d703a5f4f", 00:13:05.227 "is_configured": true, 00:13:05.227 "data_offset": 2048, 00:13:05.227 "data_size": 63488 00:13:05.227 } 00:13:05.227 ] 00:13:05.227 }' 00:13:05.227 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.227 11:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.795 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.795 11:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.795 [2024-11-05 11:29:05.008994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.731 11:29:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.731 "name": "raid_bdev1", 00:13:06.731 "uuid": "143ca8c7-8078-4e8b-990a-dae1bb524332", 00:13:06.731 "strip_size_kb": 64, 00:13:06.731 "state": "online", 00:13:06.731 "raid_level": "concat", 00:13:06.731 "superblock": true, 00:13:06.731 "num_base_bdevs": 4, 00:13:06.731 "num_base_bdevs_discovered": 4, 00:13:06.731 "num_base_bdevs_operational": 4, 00:13:06.731 "base_bdevs_list": [ 00:13:06.731 { 00:13:06.731 "name": "BaseBdev1", 00:13:06.731 "uuid": "cd96813c-3f9c-5883-9211-35701c0f60fe", 00:13:06.731 "is_configured": true, 00:13:06.731 "data_offset": 2048, 00:13:06.731 "data_size": 63488 00:13:06.731 }, 00:13:06.731 { 00:13:06.731 "name": "BaseBdev2", 00:13:06.731 "uuid": "6bd5259b-1d6e-58f6-95d5-28b26e87cd99", 00:13:06.731 "is_configured": true, 00:13:06.731 "data_offset": 2048, 00:13:06.731 "data_size": 63488 00:13:06.731 }, 00:13:06.731 { 00:13:06.731 "name": "BaseBdev3", 00:13:06.731 "uuid": "463383c3-36ff-5547-81e7-8dbbb4bf4f92", 00:13:06.731 "is_configured": true, 00:13:06.731 "data_offset": 2048, 00:13:06.731 "data_size": 63488 00:13:06.731 }, 00:13:06.731 { 00:13:06.731 "name": "BaseBdev4", 00:13:06.731 "uuid": "2885f606-9b21-5469-8ac1-680d703a5f4f", 00:13:06.731 "is_configured": true, 00:13:06.731 "data_offset": 2048, 00:13:06.731 "data_size": 63488 00:13:06.731 } 00:13:06.731 ] 00:13:06.731 }' 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.731 11:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.300 [2024-11-05 11:29:06.393283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.300 [2024-11-05 11:29:06.393320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.300 [2024-11-05 11:29:06.396343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.300 [2024-11-05 11:29:06.396411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.300 [2024-11-05 11:29:06.396460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.300 [2024-11-05 11:29:06.396476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:07.300 { 00:13:07.300 "results": [ 00:13:07.300 { 00:13:07.300 "job": "raid_bdev1", 00:13:07.300 "core_mask": "0x1", 00:13:07.300 "workload": "randrw", 00:13:07.300 "percentage": 50, 00:13:07.300 "status": "finished", 00:13:07.300 "queue_depth": 1, 00:13:07.300 "io_size": 131072, 00:13:07.300 "runtime": 1.385043, 00:13:07.300 "iops": 15887.593381577322, 00:13:07.300 "mibps": 1985.9491726971653, 00:13:07.300 "io_failed": 1, 00:13:07.300 "io_timeout": 0, 00:13:07.300 "avg_latency_us": 87.73193813358563, 00:13:07.300 "min_latency_us": 25.6, 00:13:07.300 "max_latency_us": 1438.071615720524 00:13:07.300 } 00:13:07.300 ], 00:13:07.300 "core_count": 1 00:13:07.300 } 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73145 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73145 ']' 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73145 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73145 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73145' 00:13:07.300 killing process with pid 73145 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73145 00:13:07.300 [2024-11-05 11:29:06.434924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.300 11:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73145 00:13:07.561 [2024-11-05 11:29:06.756218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z5UqJTVoAs 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:08.939 ************************************ 00:13:08.939 END TEST raid_write_error_test 00:13:08.939 ************************************ 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.939 11:29:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:08.939 00:13:08.939 real 0m4.699s 00:13:08.939 user 0m5.567s 00:13:08.939 sys 0m0.595s 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.939 11:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.939 11:29:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:08.939 11:29:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:08.939 11:29:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:08.939 11:29:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.939 11:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.939 ************************************ 00:13:08.939 START TEST raid_state_function_test 00:13:08.939 ************************************ 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:08.939 11:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:08.939 Process raid pid: 73289 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73289 00:13:08.939 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:08.939 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73289' 00:13:08.939 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73289 00:13:08.939 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73289 ']' 00:13:08.940 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.940 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.940 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.940 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.940 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.940 [2024-11-05 11:29:08.081685] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:08.940 [2024-11-05 11:29:08.081891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.199 [2024-11-05 11:29:08.253557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.199 [2024-11-05 11:29:08.374687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.456 [2024-11-05 11:29:08.579262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.456 [2024-11-05 11:29:08.579407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.715 [2024-11-05 11:29:08.924591] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.715 [2024-11-05 11:29:08.924710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.715 [2024-11-05 11:29:08.924741] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.715 [2024-11-05 11:29:08.924764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.715 [2024-11-05 11:29:08.924782] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:09.715 [2024-11-05 11:29:08.924803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.715 [2024-11-05 11:29:08.924820] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.715 [2024-11-05 11:29:08.924840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.715 "name": "Existed_Raid", 00:13:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.715 "strip_size_kb": 0, 00:13:09.715 "state": "configuring", 00:13:09.715 "raid_level": "raid1", 00:13:09.715 "superblock": false, 00:13:09.715 "num_base_bdevs": 4, 00:13:09.715 "num_base_bdevs_discovered": 0, 00:13:09.715 "num_base_bdevs_operational": 4, 00:13:09.715 "base_bdevs_list": [ 00:13:09.715 { 00:13:09.715 "name": "BaseBdev1", 00:13:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.715 "is_configured": false, 00:13:09.715 "data_offset": 0, 00:13:09.715 "data_size": 0 00:13:09.715 }, 00:13:09.715 { 00:13:09.715 "name": "BaseBdev2", 00:13:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.715 "is_configured": false, 00:13:09.715 "data_offset": 0, 00:13:09.715 "data_size": 0 00:13:09.715 }, 00:13:09.715 { 00:13:09.715 "name": "BaseBdev3", 00:13:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.715 "is_configured": false, 00:13:09.715 "data_offset": 0, 00:13:09.715 "data_size": 0 00:13:09.715 }, 00:13:09.715 { 00:13:09.715 "name": "BaseBdev4", 00:13:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.715 "is_configured": false, 00:13:09.715 "data_offset": 0, 00:13:09.715 "data_size": 0 00:13:09.715 } 00:13:09.715 ] 00:13:09.715 }' 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.715 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.282 [2024-11-05 11:29:09.395732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.282 [2024-11-05 11:29:09.395774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.282 [2024-11-05 11:29:09.407699] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.282 [2024-11-05 11:29:09.407745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.282 [2024-11-05 11:29:09.407754] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.282 [2024-11-05 11:29:09.407764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.282 [2024-11-05 11:29:09.407770] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.282 [2024-11-05 11:29:09.407780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.282 [2024-11-05 11:29:09.407786] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.282 [2024-11-05 11:29:09.407794] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.282 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.283 [2024-11-05 11:29:09.458417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.283 BaseBdev1 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.283 [ 00:13:10.283 { 00:13:10.283 "name": "BaseBdev1", 00:13:10.283 "aliases": [ 00:13:10.283 "09879e7a-407e-48be-823a-a5a960082ea0" 00:13:10.283 ], 00:13:10.283 "product_name": "Malloc disk", 00:13:10.283 "block_size": 512, 00:13:10.283 "num_blocks": 65536, 00:13:10.283 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:10.283 "assigned_rate_limits": { 00:13:10.283 "rw_ios_per_sec": 0, 00:13:10.283 "rw_mbytes_per_sec": 0, 00:13:10.283 "r_mbytes_per_sec": 0, 00:13:10.283 "w_mbytes_per_sec": 0 00:13:10.283 }, 00:13:10.283 "claimed": true, 00:13:10.283 "claim_type": "exclusive_write", 00:13:10.283 "zoned": false, 00:13:10.283 "supported_io_types": { 00:13:10.283 "read": true, 00:13:10.283 "write": true, 00:13:10.283 "unmap": true, 00:13:10.283 "flush": true, 00:13:10.283 "reset": true, 00:13:10.283 "nvme_admin": false, 00:13:10.283 "nvme_io": false, 00:13:10.283 "nvme_io_md": false, 00:13:10.283 "write_zeroes": true, 00:13:10.283 "zcopy": true, 00:13:10.283 "get_zone_info": false, 00:13:10.283 "zone_management": false, 00:13:10.283 "zone_append": false, 00:13:10.283 "compare": false, 00:13:10.283 "compare_and_write": false, 00:13:10.283 "abort": true, 00:13:10.283 "seek_hole": false, 00:13:10.283 "seek_data": false, 00:13:10.283 "copy": true, 00:13:10.283 "nvme_iov_md": false 00:13:10.283 }, 00:13:10.283 "memory_domains": [ 00:13:10.283 { 00:13:10.283 "dma_device_id": "system", 00:13:10.283 "dma_device_type": 1 00:13:10.283 }, 00:13:10.283 { 00:13:10.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.283 "dma_device_type": 2 00:13:10.283 } 00:13:10.283 ], 00:13:10.283 "driver_specific": {} 00:13:10.283 } 00:13:10.283 ] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.283 "name": "Existed_Raid", 
00:13:10.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.283 "strip_size_kb": 0, 00:13:10.283 "state": "configuring", 00:13:10.283 "raid_level": "raid1", 00:13:10.283 "superblock": false, 00:13:10.283 "num_base_bdevs": 4, 00:13:10.283 "num_base_bdevs_discovered": 1, 00:13:10.283 "num_base_bdevs_operational": 4, 00:13:10.283 "base_bdevs_list": [ 00:13:10.283 { 00:13:10.283 "name": "BaseBdev1", 00:13:10.283 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:10.283 "is_configured": true, 00:13:10.283 "data_offset": 0, 00:13:10.283 "data_size": 65536 00:13:10.283 }, 00:13:10.283 { 00:13:10.283 "name": "BaseBdev2", 00:13:10.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.283 "is_configured": false, 00:13:10.283 "data_offset": 0, 00:13:10.283 "data_size": 0 00:13:10.283 }, 00:13:10.283 { 00:13:10.283 "name": "BaseBdev3", 00:13:10.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.283 "is_configured": false, 00:13:10.283 "data_offset": 0, 00:13:10.283 "data_size": 0 00:13:10.283 }, 00:13:10.283 { 00:13:10.283 "name": "BaseBdev4", 00:13:10.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.283 "is_configured": false, 00:13:10.283 "data_offset": 0, 00:13:10.283 "data_size": 0 00:13:10.283 } 00:13:10.283 ] 00:13:10.283 }' 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.283 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.849 [2024-11-05 11:29:09.981577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.849 [2024-11-05 11:29:09.981635] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.849 [2024-11-05 11:29:09.993598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.849 [2024-11-05 11:29:09.995380] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.849 [2024-11-05 11:29:09.995471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.849 [2024-11-05 11:29:09.995486] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.849 [2024-11-05 11:29:09.995498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.849 [2024-11-05 11:29:09.995505] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.849 [2024-11-05 11:29:09.995513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.849 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.850 
11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.850 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.850 "name": "Existed_Raid", 00:13:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.850 "strip_size_kb": 0, 00:13:10.850 "state": "configuring", 00:13:10.850 "raid_level": "raid1", 00:13:10.850 "superblock": false, 00:13:10.850 "num_base_bdevs": 4, 00:13:10.850 "num_base_bdevs_discovered": 1, 
00:13:10.850 "num_base_bdevs_operational": 4, 00:13:10.850 "base_bdevs_list": [ 00:13:10.850 { 00:13:10.850 "name": "BaseBdev1", 00:13:10.850 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:10.850 "is_configured": true, 00:13:10.850 "data_offset": 0, 00:13:10.850 "data_size": 65536 00:13:10.850 }, 00:13:10.850 { 00:13:10.850 "name": "BaseBdev2", 00:13:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.850 "is_configured": false, 00:13:10.850 "data_offset": 0, 00:13:10.850 "data_size": 0 00:13:10.850 }, 00:13:10.850 { 00:13:10.850 "name": "BaseBdev3", 00:13:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.850 "is_configured": false, 00:13:10.850 "data_offset": 0, 00:13:10.850 "data_size": 0 00:13:10.850 }, 00:13:10.850 { 00:13:10.850 "name": "BaseBdev4", 00:13:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.850 "is_configured": false, 00:13:10.850 "data_offset": 0, 00:13:10.850 "data_size": 0 00:13:10.850 } 00:13:10.850 ] 00:13:10.850 }' 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.850 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 [2024-11-05 11:29:10.480436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.417 BaseBdev2 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.418 [ 00:13:11.418 { 00:13:11.418 "name": "BaseBdev2", 00:13:11.418 "aliases": [ 00:13:11.418 "0c99b5da-2d00-41f6-887c-3a3528b6198a" 00:13:11.418 ], 00:13:11.418 "product_name": "Malloc disk", 00:13:11.418 "block_size": 512, 00:13:11.418 "num_blocks": 65536, 00:13:11.418 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:11.418 "assigned_rate_limits": { 00:13:11.418 "rw_ios_per_sec": 0, 00:13:11.418 "rw_mbytes_per_sec": 0, 00:13:11.418 "r_mbytes_per_sec": 0, 00:13:11.418 "w_mbytes_per_sec": 0 00:13:11.418 }, 00:13:11.418 "claimed": true, 00:13:11.418 "claim_type": "exclusive_write", 00:13:11.418 "zoned": false, 00:13:11.418 "supported_io_types": { 00:13:11.418 "read": true, 
00:13:11.418 "write": true, 00:13:11.418 "unmap": true, 00:13:11.418 "flush": true, 00:13:11.418 "reset": true, 00:13:11.418 "nvme_admin": false, 00:13:11.418 "nvme_io": false, 00:13:11.418 "nvme_io_md": false, 00:13:11.418 "write_zeroes": true, 00:13:11.418 "zcopy": true, 00:13:11.418 "get_zone_info": false, 00:13:11.418 "zone_management": false, 00:13:11.418 "zone_append": false, 00:13:11.418 "compare": false, 00:13:11.418 "compare_and_write": false, 00:13:11.418 "abort": true, 00:13:11.418 "seek_hole": false, 00:13:11.418 "seek_data": false, 00:13:11.418 "copy": true, 00:13:11.418 "nvme_iov_md": false 00:13:11.418 }, 00:13:11.418 "memory_domains": [ 00:13:11.418 { 00:13:11.418 "dma_device_id": "system", 00:13:11.418 "dma_device_type": 1 00:13:11.418 }, 00:13:11.418 { 00:13:11.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.418 "dma_device_type": 2 00:13:11.418 } 00:13:11.418 ], 00:13:11.418 "driver_specific": {} 00:13:11.418 } 00:13:11.418 ] 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.418 "name": "Existed_Raid", 00:13:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.418 "strip_size_kb": 0, 00:13:11.418 "state": "configuring", 00:13:11.418 "raid_level": "raid1", 00:13:11.418 "superblock": false, 00:13:11.418 "num_base_bdevs": 4, 00:13:11.418 "num_base_bdevs_discovered": 2, 00:13:11.418 "num_base_bdevs_operational": 4, 00:13:11.418 "base_bdevs_list": [ 00:13:11.418 { 00:13:11.418 "name": "BaseBdev1", 00:13:11.418 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:11.418 "is_configured": true, 00:13:11.418 "data_offset": 0, 00:13:11.418 "data_size": 65536 00:13:11.418 }, 00:13:11.418 { 00:13:11.418 "name": "BaseBdev2", 00:13:11.418 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:11.418 "is_configured": true, 
00:13:11.418 "data_offset": 0, 00:13:11.418 "data_size": 65536 00:13:11.418 }, 00:13:11.418 { 00:13:11.418 "name": "BaseBdev3", 00:13:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.418 "is_configured": false, 00:13:11.418 "data_offset": 0, 00:13:11.418 "data_size": 0 00:13:11.418 }, 00:13:11.418 { 00:13:11.418 "name": "BaseBdev4", 00:13:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.418 "is_configured": false, 00:13:11.418 "data_offset": 0, 00:13:11.418 "data_size": 0 00:13:11.418 } 00:13:11.418 ] 00:13:11.418 }' 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.418 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.677 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 [2024-11-05 11:29:10.993721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.936 BaseBdev3 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.936 11:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 [ 00:13:11.936 { 00:13:11.936 "name": "BaseBdev3", 00:13:11.936 "aliases": [ 00:13:11.936 "75002a2c-b709-4e72-8c26-b1d9d91c0cb2" 00:13:11.936 ], 00:13:11.936 "product_name": "Malloc disk", 00:13:11.936 "block_size": 512, 00:13:11.936 "num_blocks": 65536, 00:13:11.936 "uuid": "75002a2c-b709-4e72-8c26-b1d9d91c0cb2", 00:13:11.936 "assigned_rate_limits": { 00:13:11.936 "rw_ios_per_sec": 0, 00:13:11.936 "rw_mbytes_per_sec": 0, 00:13:11.936 "r_mbytes_per_sec": 0, 00:13:11.936 "w_mbytes_per_sec": 0 00:13:11.936 }, 00:13:11.936 "claimed": true, 00:13:11.936 "claim_type": "exclusive_write", 00:13:11.936 "zoned": false, 00:13:11.936 "supported_io_types": { 00:13:11.936 "read": true, 00:13:11.936 "write": true, 00:13:11.936 "unmap": true, 00:13:11.936 "flush": true, 00:13:11.936 "reset": true, 00:13:11.936 "nvme_admin": false, 00:13:11.936 "nvme_io": false, 00:13:11.936 "nvme_io_md": false, 00:13:11.936 "write_zeroes": true, 00:13:11.936 "zcopy": true, 00:13:11.936 "get_zone_info": false, 00:13:11.936 "zone_management": false, 00:13:11.936 "zone_append": false, 00:13:11.936 "compare": false, 00:13:11.936 "compare_and_write": false, 
00:13:11.936 "abort": true, 00:13:11.936 "seek_hole": false, 00:13:11.936 "seek_data": false, 00:13:11.936 "copy": true, 00:13:11.936 "nvme_iov_md": false 00:13:11.936 }, 00:13:11.936 "memory_domains": [ 00:13:11.936 { 00:13:11.936 "dma_device_id": "system", 00:13:11.936 "dma_device_type": 1 00:13:11.936 }, 00:13:11.936 { 00:13:11.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.936 "dma_device_type": 2 00:13:11.936 } 00:13:11.936 ], 00:13:11.936 "driver_specific": {} 00:13:11.936 } 00:13:11.936 ] 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.936 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.937 "name": "Existed_Raid", 00:13:11.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.937 "strip_size_kb": 0, 00:13:11.937 "state": "configuring", 00:13:11.937 "raid_level": "raid1", 00:13:11.937 "superblock": false, 00:13:11.937 "num_base_bdevs": 4, 00:13:11.937 "num_base_bdevs_discovered": 3, 00:13:11.937 "num_base_bdevs_operational": 4, 00:13:11.937 "base_bdevs_list": [ 00:13:11.937 { 00:13:11.937 "name": "BaseBdev1", 00:13:11.937 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "name": "BaseBdev2", 00:13:11.937 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "name": "BaseBdev3", 00:13:11.937 "uuid": "75002a2c-b709-4e72-8c26-b1d9d91c0cb2", 00:13:11.937 "is_configured": true, 00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 65536 00:13:11.937 }, 00:13:11.937 { 00:13:11.937 "name": "BaseBdev4", 00:13:11.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.937 "is_configured": false, 
00:13:11.937 "data_offset": 0, 00:13:11.937 "data_size": 0 00:13:11.937 } 00:13:11.937 ] 00:13:11.937 }' 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.937 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:12.505 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.505 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 [2024-11-05 11:29:11.540531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.505 [2024-11-05 11:29:11.540672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:12.505 [2024-11-05 11:29:11.540702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:12.505 [2024-11-05 11:29:11.541032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:12.506 [2024-11-05 11:29:11.541271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:12.506 [2024-11-05 11:29:11.541322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:12.506 [2024-11-05 11:29:11.541643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.506 BaseBdev4 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 [ 00:13:12.506 { 00:13:12.506 "name": "BaseBdev4", 00:13:12.506 "aliases": [ 00:13:12.506 "b0a303e3-6c6f-4ad1-a2a3-c7041229fb9c" 00:13:12.506 ], 00:13:12.506 "product_name": "Malloc disk", 00:13:12.506 "block_size": 512, 00:13:12.506 "num_blocks": 65536, 00:13:12.506 "uuid": "b0a303e3-6c6f-4ad1-a2a3-c7041229fb9c", 00:13:12.506 "assigned_rate_limits": { 00:13:12.506 "rw_ios_per_sec": 0, 00:13:12.506 "rw_mbytes_per_sec": 0, 00:13:12.506 "r_mbytes_per_sec": 0, 00:13:12.506 "w_mbytes_per_sec": 0 00:13:12.506 }, 00:13:12.506 "claimed": true, 00:13:12.506 "claim_type": "exclusive_write", 00:13:12.506 "zoned": false, 00:13:12.506 "supported_io_types": { 00:13:12.506 "read": true, 00:13:12.506 "write": true, 00:13:12.506 "unmap": true, 00:13:12.506 "flush": true, 00:13:12.506 "reset": true, 00:13:12.506 
"nvme_admin": false, 00:13:12.506 "nvme_io": false, 00:13:12.506 "nvme_io_md": false, 00:13:12.506 "write_zeroes": true, 00:13:12.506 "zcopy": true, 00:13:12.506 "get_zone_info": false, 00:13:12.506 "zone_management": false, 00:13:12.506 "zone_append": false, 00:13:12.506 "compare": false, 00:13:12.506 "compare_and_write": false, 00:13:12.506 "abort": true, 00:13:12.506 "seek_hole": false, 00:13:12.506 "seek_data": false, 00:13:12.506 "copy": true, 00:13:12.506 "nvme_iov_md": false 00:13:12.506 }, 00:13:12.506 "memory_domains": [ 00:13:12.506 { 00:13:12.506 "dma_device_id": "system", 00:13:12.506 "dma_device_type": 1 00:13:12.506 }, 00:13:12.506 { 00:13:12.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.506 "dma_device_type": 2 00:13:12.506 } 00:13:12.506 ], 00:13:12.506 "driver_specific": {} 00:13:12.506 } 00:13:12.506 ] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.506 11:29:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.506 "name": "Existed_Raid", 00:13:12.506 "uuid": "d541b9a4-cd24-4251-9649-f910e1301909", 00:13:12.506 "strip_size_kb": 0, 00:13:12.506 "state": "online", 00:13:12.506 "raid_level": "raid1", 00:13:12.506 "superblock": false, 00:13:12.506 "num_base_bdevs": 4, 00:13:12.506 "num_base_bdevs_discovered": 4, 00:13:12.506 "num_base_bdevs_operational": 4, 00:13:12.506 "base_bdevs_list": [ 00:13:12.506 { 00:13:12.506 "name": "BaseBdev1", 00:13:12.506 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:12.506 "is_configured": true, 00:13:12.506 "data_offset": 0, 00:13:12.506 "data_size": 65536 00:13:12.506 }, 00:13:12.506 { 00:13:12.506 "name": "BaseBdev2", 00:13:12.506 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:12.506 "is_configured": true, 00:13:12.506 "data_offset": 0, 00:13:12.506 "data_size": 65536 00:13:12.506 }, 00:13:12.506 { 00:13:12.506 "name": "BaseBdev3", 00:13:12.506 "uuid": 
"75002a2c-b709-4e72-8c26-b1d9d91c0cb2", 00:13:12.506 "is_configured": true, 00:13:12.506 "data_offset": 0, 00:13:12.506 "data_size": 65536 00:13:12.506 }, 00:13:12.506 { 00:13:12.506 "name": "BaseBdev4", 00:13:12.506 "uuid": "b0a303e3-6c6f-4ad1-a2a3-c7041229fb9c", 00:13:12.506 "is_configured": true, 00:13:12.506 "data_offset": 0, 00:13:12.506 "data_size": 65536 00:13:12.506 } 00:13:12.506 ] 00:13:12.506 }' 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.506 11:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.766 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.766 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.766 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:12.766 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.766 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.766 [2024-11-05 11:29:12.008242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.766 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.026 11:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.026 "name": "Existed_Raid", 00:13:13.026 "aliases": [ 00:13:13.026 "d541b9a4-cd24-4251-9649-f910e1301909" 00:13:13.026 ], 00:13:13.026 "product_name": "Raid Volume", 00:13:13.026 "block_size": 512, 00:13:13.026 "num_blocks": 65536, 00:13:13.026 "uuid": "d541b9a4-cd24-4251-9649-f910e1301909", 00:13:13.026 "assigned_rate_limits": { 00:13:13.026 "rw_ios_per_sec": 0, 00:13:13.026 "rw_mbytes_per_sec": 0, 00:13:13.026 "r_mbytes_per_sec": 0, 00:13:13.026 "w_mbytes_per_sec": 0 00:13:13.026 }, 00:13:13.026 "claimed": false, 00:13:13.026 "zoned": false, 00:13:13.026 "supported_io_types": { 00:13:13.026 "read": true, 00:13:13.026 "write": true, 00:13:13.026 "unmap": false, 00:13:13.026 "flush": false, 00:13:13.026 "reset": true, 00:13:13.026 "nvme_admin": false, 00:13:13.026 "nvme_io": false, 00:13:13.026 "nvme_io_md": false, 00:13:13.026 "write_zeroes": true, 00:13:13.026 "zcopy": false, 00:13:13.026 "get_zone_info": false, 00:13:13.026 "zone_management": false, 00:13:13.026 "zone_append": false, 00:13:13.026 "compare": false, 00:13:13.026 "compare_and_write": false, 00:13:13.026 "abort": false, 00:13:13.026 "seek_hole": false, 00:13:13.026 "seek_data": false, 00:13:13.026 "copy": false, 00:13:13.026 "nvme_iov_md": false 00:13:13.026 }, 00:13:13.026 "memory_domains": [ 00:13:13.026 { 00:13:13.026 "dma_device_id": "system", 00:13:13.026 "dma_device_type": 1 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.026 "dma_device_type": 2 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "system", 00:13:13.026 "dma_device_type": 1 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.026 "dma_device_type": 2 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "system", 00:13:13.026 "dma_device_type": 1 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:13.026 "dma_device_type": 2 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "system", 00:13:13.026 "dma_device_type": 1 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.026 "dma_device_type": 2 00:13:13.026 } 00:13:13.026 ], 00:13:13.026 "driver_specific": { 00:13:13.026 "raid": { 00:13:13.026 "uuid": "d541b9a4-cd24-4251-9649-f910e1301909", 00:13:13.026 "strip_size_kb": 0, 00:13:13.026 "state": "online", 00:13:13.026 "raid_level": "raid1", 00:13:13.026 "superblock": false, 00:13:13.026 "num_base_bdevs": 4, 00:13:13.026 "num_base_bdevs_discovered": 4, 00:13:13.026 "num_base_bdevs_operational": 4, 00:13:13.026 "base_bdevs_list": [ 00:13:13.026 { 00:13:13.026 "name": "BaseBdev1", 00:13:13.026 "uuid": "09879e7a-407e-48be-823a-a5a960082ea0", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "name": "BaseBdev2", 00:13:13.026 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "name": "BaseBdev3", 00:13:13.026 "uuid": "75002a2c-b709-4e72-8c26-b1d9d91c0cb2", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 }, 00:13:13.026 { 00:13:13.026 "name": "BaseBdev4", 00:13:13.026 "uuid": "b0a303e3-6c6f-4ad1-a2a3-c7041229fb9c", 00:13:13.026 "is_configured": true, 00:13:13.026 "data_offset": 0, 00:13:13.026 "data_size": 65536 00:13:13.026 } 00:13:13.026 ] 00:13:13.026 } 00:13:13.026 } 00:13:13.026 }' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:13.026 BaseBdev2 00:13:13.026 BaseBdev3 
00:13:13.026 BaseBdev4' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.026 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.027 11:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.027 11:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.027 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.027 [2024-11-05 11:29:12.275435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.290 
11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.290 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.290 "name": "Existed_Raid", 00:13:13.290 "uuid": "d541b9a4-cd24-4251-9649-f910e1301909", 00:13:13.290 "strip_size_kb": 0, 00:13:13.290 "state": "online", 00:13:13.290 "raid_level": "raid1", 00:13:13.290 "superblock": false, 00:13:13.290 "num_base_bdevs": 4, 00:13:13.290 "num_base_bdevs_discovered": 3, 00:13:13.290 "num_base_bdevs_operational": 3, 00:13:13.290 "base_bdevs_list": [ 00:13:13.290 { 00:13:13.290 "name": null, 00:13:13.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.290 "is_configured": false, 00:13:13.290 "data_offset": 0, 00:13:13.290 "data_size": 65536 00:13:13.290 }, 00:13:13.290 { 00:13:13.290 "name": "BaseBdev2", 00:13:13.290 "uuid": "0c99b5da-2d00-41f6-887c-3a3528b6198a", 00:13:13.290 "is_configured": true, 00:13:13.290 "data_offset": 0, 00:13:13.290 "data_size": 65536 00:13:13.290 }, 00:13:13.290 { 00:13:13.290 "name": "BaseBdev3", 00:13:13.290 "uuid": "75002a2c-b709-4e72-8c26-b1d9d91c0cb2", 00:13:13.290 "is_configured": true, 00:13:13.290 "data_offset": 0, 
00:13:13.290 "data_size": 65536 00:13:13.290 }, 00:13:13.290 { 00:13:13.290 "name": "BaseBdev4", 00:13:13.290 "uuid": "b0a303e3-6c6f-4ad1-a2a3-c7041229fb9c", 00:13:13.290 "is_configured": true, 00:13:13.290 "data_offset": 0, 00:13:13.290 "data_size": 65536 00:13:13.290 } 00:13:13.290 ] 00:13:13.290 }' 00:13:13.291 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.291 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.556 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 [2024-11-05 11:29:12.837494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.816 11:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 [2024-11-05 11:29:12.964734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.076 [2024-11-05 11:29:13.115387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:14.076 [2024-11-05 11:29:13.115528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.076 [2024-11-05 11:29:13.204997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.076 [2024-11-05 11:29:13.205140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.076 [2024-11-05 11:29:13.205171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.076 BaseBdev2 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.076 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.076 [ 00:13:14.076 { 00:13:14.076 "name": "BaseBdev2", 00:13:14.076 "aliases": [ 00:13:14.076 "606930b8-94c8-4d31-bcd9-d2655dd5a8e0" 00:13:14.076 ], 00:13:14.076 "product_name": "Malloc disk", 00:13:14.076 "block_size": 512, 00:13:14.076 "num_blocks": 65536, 00:13:14.076 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:14.076 "assigned_rate_limits": { 00:13:14.076 "rw_ios_per_sec": 0, 00:13:14.076 "rw_mbytes_per_sec": 0, 00:13:14.076 "r_mbytes_per_sec": 0, 00:13:14.076 "w_mbytes_per_sec": 0 00:13:14.076 }, 00:13:14.076 "claimed": false, 00:13:14.076 "zoned": false, 00:13:14.076 "supported_io_types": { 00:13:14.076 "read": true, 00:13:14.076 "write": true, 00:13:14.077 "unmap": true, 00:13:14.077 "flush": true, 00:13:14.077 "reset": true, 00:13:14.077 "nvme_admin": false, 00:13:14.077 "nvme_io": false, 00:13:14.077 "nvme_io_md": false, 00:13:14.077 "write_zeroes": true, 00:13:14.077 "zcopy": true, 00:13:14.077 "get_zone_info": false, 00:13:14.077 "zone_management": false, 00:13:14.077 "zone_append": false, 
00:13:14.077 "compare": false, 00:13:14.077 "compare_and_write": false, 00:13:14.077 "abort": true, 00:13:14.077 "seek_hole": false, 00:13:14.077 "seek_data": false, 00:13:14.077 "copy": true, 00:13:14.077 "nvme_iov_md": false 00:13:14.077 }, 00:13:14.077 "memory_domains": [ 00:13:14.077 { 00:13:14.077 "dma_device_id": "system", 00:13:14.077 "dma_device_type": 1 00:13:14.077 }, 00:13:14.077 { 00:13:14.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.077 "dma_device_type": 2 00:13:14.077 } 00:13:14.077 ], 00:13:14.077 "driver_specific": {} 00:13:14.077 } 00:13:14.077 ] 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.077 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.337 BaseBdev3 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.337 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.337 [ 00:13:14.337 { 00:13:14.337 "name": "BaseBdev3", 00:13:14.337 "aliases": [ 00:13:14.337 "fe2d2b30-77fa-4d50-9374-0221b084ff33" 00:13:14.337 ], 00:13:14.337 "product_name": "Malloc disk", 00:13:14.337 "block_size": 512, 00:13:14.337 "num_blocks": 65536, 00:13:14.337 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:14.337 "assigned_rate_limits": { 00:13:14.337 "rw_ios_per_sec": 0, 00:13:14.337 "rw_mbytes_per_sec": 0, 00:13:14.337 "r_mbytes_per_sec": 0, 00:13:14.337 "w_mbytes_per_sec": 0 00:13:14.337 }, 00:13:14.337 "claimed": false, 00:13:14.337 "zoned": false, 00:13:14.337 "supported_io_types": { 00:13:14.337 "read": true, 00:13:14.337 "write": true, 00:13:14.337 "unmap": true, 00:13:14.337 "flush": true, 00:13:14.337 "reset": true, 00:13:14.337 "nvme_admin": false, 00:13:14.337 "nvme_io": false, 00:13:14.337 "nvme_io_md": false, 00:13:14.337 "write_zeroes": true, 00:13:14.337 "zcopy": true, 00:13:14.337 "get_zone_info": false, 00:13:14.337 "zone_management": false, 00:13:14.337 "zone_append": false, 
00:13:14.337 "compare": false, 00:13:14.337 "compare_and_write": false, 00:13:14.337 "abort": true, 00:13:14.337 "seek_hole": false, 00:13:14.337 "seek_data": false, 00:13:14.337 "copy": true, 00:13:14.337 "nvme_iov_md": false 00:13:14.337 }, 00:13:14.337 "memory_domains": [ 00:13:14.337 { 00:13:14.337 "dma_device_id": "system", 00:13:14.337 "dma_device_type": 1 00:13:14.337 }, 00:13:14.337 { 00:13:14.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.337 "dma_device_type": 2 00:13:14.337 } 00:13:14.338 ], 00:13:14.338 "driver_specific": {} 00:13:14.338 } 00:13:14.338 ] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 BaseBdev4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 [ 00:13:14.338 { 00:13:14.338 "name": "BaseBdev4", 00:13:14.338 "aliases": [ 00:13:14.338 "3e03f4e0-0d75-4029-a6aa-432c4c16ac81" 00:13:14.338 ], 00:13:14.338 "product_name": "Malloc disk", 00:13:14.338 "block_size": 512, 00:13:14.338 "num_blocks": 65536, 00:13:14.338 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:14.338 "assigned_rate_limits": { 00:13:14.338 "rw_ios_per_sec": 0, 00:13:14.338 "rw_mbytes_per_sec": 0, 00:13:14.338 "r_mbytes_per_sec": 0, 00:13:14.338 "w_mbytes_per_sec": 0 00:13:14.338 }, 00:13:14.338 "claimed": false, 00:13:14.338 "zoned": false, 00:13:14.338 "supported_io_types": { 00:13:14.338 "read": true, 00:13:14.338 "write": true, 00:13:14.338 "unmap": true, 00:13:14.338 "flush": true, 00:13:14.338 "reset": true, 00:13:14.338 "nvme_admin": false, 00:13:14.338 "nvme_io": false, 00:13:14.338 "nvme_io_md": false, 00:13:14.338 "write_zeroes": true, 00:13:14.338 "zcopy": true, 00:13:14.338 "get_zone_info": false, 00:13:14.338 "zone_management": false, 00:13:14.338 "zone_append": false, 
00:13:14.338 "compare": false, 00:13:14.338 "compare_and_write": false, 00:13:14.338 "abort": true, 00:13:14.338 "seek_hole": false, 00:13:14.338 "seek_data": false, 00:13:14.338 "copy": true, 00:13:14.338 "nvme_iov_md": false 00:13:14.338 }, 00:13:14.338 "memory_domains": [ 00:13:14.338 { 00:13:14.338 "dma_device_id": "system", 00:13:14.338 "dma_device_type": 1 00:13:14.338 }, 00:13:14.338 { 00:13:14.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.338 "dma_device_type": 2 00:13:14.338 } 00:13:14.338 ], 00:13:14.338 "driver_specific": {} 00:13:14.338 } 00:13:14.338 ] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 [2024-11-05 11:29:13.500527] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.338 [2024-11-05 11:29:13.500625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.338 [2024-11-05 11:29:13.500662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.338 [2024-11-05 11:29:13.502456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.338 [2024-11-05 11:29:13.502544] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:14.338 "name": "Existed_Raid", 00:13:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.338 "strip_size_kb": 0, 00:13:14.338 "state": "configuring", 00:13:14.338 "raid_level": "raid1", 00:13:14.338 "superblock": false, 00:13:14.338 "num_base_bdevs": 4, 00:13:14.338 "num_base_bdevs_discovered": 3, 00:13:14.338 "num_base_bdevs_operational": 4, 00:13:14.338 "base_bdevs_list": [ 00:13:14.338 { 00:13:14.338 "name": "BaseBdev1", 00:13:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.338 "is_configured": false, 00:13:14.338 "data_offset": 0, 00:13:14.338 "data_size": 0 00:13:14.338 }, 00:13:14.338 { 00:13:14.338 "name": "BaseBdev2", 00:13:14.338 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:14.338 "is_configured": true, 00:13:14.338 "data_offset": 0, 00:13:14.338 "data_size": 65536 00:13:14.338 }, 00:13:14.338 { 00:13:14.338 "name": "BaseBdev3", 00:13:14.338 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:14.338 "is_configured": true, 00:13:14.338 "data_offset": 0, 00:13:14.338 "data_size": 65536 00:13:14.338 }, 00:13:14.338 { 00:13:14.338 "name": "BaseBdev4", 00:13:14.338 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:14.338 "is_configured": true, 00:13:14.338 "data_offset": 0, 00:13:14.338 "data_size": 65536 00:13:14.338 } 00:13:14.338 ] 00:13:14.338 }' 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.338 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.908 [2024-11-05 11:29:13.983773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.908 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.909 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.909 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.909 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.909 "name": "Existed_Raid", 00:13:14.909 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:14.909 "strip_size_kb": 0, 00:13:14.909 "state": "configuring", 00:13:14.909 "raid_level": "raid1", 00:13:14.909 "superblock": false, 00:13:14.909 "num_base_bdevs": 4, 00:13:14.909 "num_base_bdevs_discovered": 2, 00:13:14.909 "num_base_bdevs_operational": 4, 00:13:14.909 "base_bdevs_list": [ 00:13:14.909 { 00:13:14.909 "name": "BaseBdev1", 00:13:14.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.909 "is_configured": false, 00:13:14.909 "data_offset": 0, 00:13:14.909 "data_size": 0 00:13:14.909 }, 00:13:14.909 { 00:13:14.909 "name": null, 00:13:14.909 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:14.909 "is_configured": false, 00:13:14.909 "data_offset": 0, 00:13:14.909 "data_size": 65536 00:13:14.909 }, 00:13:14.909 { 00:13:14.909 "name": "BaseBdev3", 00:13:14.909 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:14.909 "is_configured": true, 00:13:14.909 "data_offset": 0, 00:13:14.909 "data_size": 65536 00:13:14.909 }, 00:13:14.909 { 00:13:14.909 "name": "BaseBdev4", 00:13:14.909 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:14.909 "is_configured": true, 00:13:14.909 "data_offset": 0, 00:13:14.909 "data_size": 65536 00:13:14.909 } 00:13:14.909 ] 00:13:14.909 }' 00:13:14.909 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.909 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.479 [2024-11-05 11:29:14.531072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.479 BaseBdev1 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:15.479 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.480 [ 00:13:15.480 { 00:13:15.480 "name": "BaseBdev1", 00:13:15.480 "aliases": [ 00:13:15.480 "78e4c8da-093e-4123-b6c6-dde067933ad6" 00:13:15.480 ], 00:13:15.480 "product_name": "Malloc disk", 00:13:15.480 "block_size": 512, 00:13:15.480 "num_blocks": 65536, 00:13:15.480 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:15.480 "assigned_rate_limits": { 00:13:15.480 "rw_ios_per_sec": 0, 00:13:15.480 "rw_mbytes_per_sec": 0, 00:13:15.480 "r_mbytes_per_sec": 0, 00:13:15.480 "w_mbytes_per_sec": 0 00:13:15.480 }, 00:13:15.480 "claimed": true, 00:13:15.480 "claim_type": "exclusive_write", 00:13:15.480 "zoned": false, 00:13:15.480 "supported_io_types": { 00:13:15.480 "read": true, 00:13:15.480 "write": true, 00:13:15.480 "unmap": true, 00:13:15.480 "flush": true, 00:13:15.480 "reset": true, 00:13:15.480 "nvme_admin": false, 00:13:15.480 "nvme_io": false, 00:13:15.480 "nvme_io_md": false, 00:13:15.480 "write_zeroes": true, 00:13:15.480 "zcopy": true, 00:13:15.480 "get_zone_info": false, 00:13:15.480 "zone_management": false, 00:13:15.480 "zone_append": false, 00:13:15.480 "compare": false, 00:13:15.480 "compare_and_write": false, 00:13:15.480 "abort": true, 00:13:15.480 "seek_hole": false, 00:13:15.480 "seek_data": false, 00:13:15.480 "copy": true, 00:13:15.480 "nvme_iov_md": false 00:13:15.480 }, 00:13:15.480 "memory_domains": [ 00:13:15.480 { 00:13:15.480 "dma_device_id": "system", 00:13:15.480 "dma_device_type": 1 00:13:15.480 }, 00:13:15.480 { 00:13:15.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.480 "dma_device_type": 2 00:13:15.480 } 00:13:15.480 ], 00:13:15.480 "driver_specific": {} 00:13:15.480 } 00:13:15.480 ] 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.480 "name": "Existed_Raid", 00:13:15.480 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:15.480 "strip_size_kb": 0, 00:13:15.480 "state": "configuring", 00:13:15.480 "raid_level": "raid1", 00:13:15.480 "superblock": false, 00:13:15.480 "num_base_bdevs": 4, 00:13:15.480 "num_base_bdevs_discovered": 3, 00:13:15.480 "num_base_bdevs_operational": 4, 00:13:15.480 "base_bdevs_list": [ 00:13:15.480 { 00:13:15.480 "name": "BaseBdev1", 00:13:15.480 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:15.480 "is_configured": true, 00:13:15.480 "data_offset": 0, 00:13:15.480 "data_size": 65536 00:13:15.480 }, 00:13:15.480 { 00:13:15.480 "name": null, 00:13:15.480 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:15.480 "is_configured": false, 00:13:15.480 "data_offset": 0, 00:13:15.480 "data_size": 65536 00:13:15.480 }, 00:13:15.480 { 00:13:15.480 "name": "BaseBdev3", 00:13:15.480 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:15.480 "is_configured": true, 00:13:15.480 "data_offset": 0, 00:13:15.480 "data_size": 65536 00:13:15.480 }, 00:13:15.480 { 00:13:15.480 "name": "BaseBdev4", 00:13:15.480 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:15.480 "is_configured": true, 00:13:15.480 "data_offset": 0, 00:13:15.480 "data_size": 65536 00:13:15.480 } 00:13:15.480 ] 00:13:15.480 }' 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.480 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.740 11:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.740 [2024-11-05 11:29:14.998360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.740 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.999 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.999 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.999 "name": "Existed_Raid", 00:13:15.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.999 "strip_size_kb": 0, 00:13:15.999 "state": "configuring", 00:13:15.999 "raid_level": "raid1", 00:13:15.999 "superblock": false, 00:13:15.999 "num_base_bdevs": 4, 00:13:15.999 "num_base_bdevs_discovered": 2, 00:13:15.999 "num_base_bdevs_operational": 4, 00:13:15.999 "base_bdevs_list": [ 00:13:15.999 { 00:13:15.999 "name": "BaseBdev1", 00:13:15.999 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:15.999 "is_configured": true, 00:13:15.999 "data_offset": 0, 00:13:15.999 "data_size": 65536 00:13:15.999 }, 00:13:15.999 { 00:13:15.999 "name": null, 00:13:15.999 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:15.999 "is_configured": false, 00:13:15.999 "data_offset": 0, 00:13:15.999 "data_size": 65536 00:13:15.999 }, 00:13:15.999 { 00:13:15.999 "name": null, 00:13:15.999 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:15.999 "is_configured": false, 00:13:15.999 "data_offset": 0, 00:13:15.999 "data_size": 65536 00:13:15.999 }, 00:13:15.999 { 00:13:15.999 "name": "BaseBdev4", 00:13:15.999 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:15.999 "is_configured": true, 00:13:15.999 "data_offset": 0, 00:13:15.999 "data_size": 65536 00:13:15.999 } 00:13:15.999 ] 00:13:15.999 }' 00:13:15.999 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.999 11:29:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.259 [2024-11-05 11:29:15.461541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.259 11:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.259 "name": "Existed_Raid", 00:13:16.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.259 "strip_size_kb": 0, 00:13:16.259 "state": "configuring", 00:13:16.259 "raid_level": "raid1", 00:13:16.259 "superblock": false, 00:13:16.259 "num_base_bdevs": 4, 00:13:16.259 "num_base_bdevs_discovered": 3, 00:13:16.259 "num_base_bdevs_operational": 4, 00:13:16.259 "base_bdevs_list": [ 00:13:16.259 { 00:13:16.259 "name": "BaseBdev1", 00:13:16.259 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:16.259 "is_configured": true, 00:13:16.259 "data_offset": 0, 00:13:16.259 "data_size": 65536 00:13:16.259 }, 00:13:16.259 { 00:13:16.259 "name": null, 00:13:16.259 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:16.259 "is_configured": false, 00:13:16.259 "data_offset": 
0, 00:13:16.259 "data_size": 65536 00:13:16.259 }, 00:13:16.259 { 00:13:16.259 "name": "BaseBdev3", 00:13:16.259 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:16.259 "is_configured": true, 00:13:16.259 "data_offset": 0, 00:13:16.259 "data_size": 65536 00:13:16.259 }, 00:13:16.259 { 00:13:16.259 "name": "BaseBdev4", 00:13:16.259 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:16.259 "is_configured": true, 00:13:16.259 "data_offset": 0, 00:13:16.259 "data_size": 65536 00:13:16.259 } 00:13:16.259 ] 00:13:16.259 }' 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.259 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 [2024-11-05 11:29:15.900807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.829 11:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.829 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.829 "name": "Existed_Raid", 00:13:16.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.829 "strip_size_kb": 0, 00:13:16.829 "state": "configuring", 00:13:16.829 
"raid_level": "raid1", 00:13:16.829 "superblock": false, 00:13:16.829 "num_base_bdevs": 4, 00:13:16.829 "num_base_bdevs_discovered": 2, 00:13:16.829 "num_base_bdevs_operational": 4, 00:13:16.829 "base_bdevs_list": [ 00:13:16.829 { 00:13:16.829 "name": null, 00:13:16.829 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:16.829 "is_configured": false, 00:13:16.829 "data_offset": 0, 00:13:16.829 "data_size": 65536 00:13:16.829 }, 00:13:16.829 { 00:13:16.829 "name": null, 00:13:16.829 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:16.829 "is_configured": false, 00:13:16.829 "data_offset": 0, 00:13:16.829 "data_size": 65536 00:13:16.829 }, 00:13:16.829 { 00:13:16.829 "name": "BaseBdev3", 00:13:16.829 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:16.829 "is_configured": true, 00:13:16.829 "data_offset": 0, 00:13:16.829 "data_size": 65536 00:13:16.829 }, 00:13:16.829 { 00:13:16.829 "name": "BaseBdev4", 00:13:16.829 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:16.829 "is_configured": true, 00:13:16.829 "data_offset": 0, 00:13:16.829 "data_size": 65536 00:13:16.829 } 00:13:16.829 ] 00:13:16.829 }' 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.829 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.400 [2024-11-05 11:29:16.476679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.400 "name": "Existed_Raid", 00:13:17.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.400 "strip_size_kb": 0, 00:13:17.400 "state": "configuring", 00:13:17.400 "raid_level": "raid1", 00:13:17.400 "superblock": false, 00:13:17.400 "num_base_bdevs": 4, 00:13:17.400 "num_base_bdevs_discovered": 3, 00:13:17.400 "num_base_bdevs_operational": 4, 00:13:17.400 "base_bdevs_list": [ 00:13:17.400 { 00:13:17.400 "name": null, 00:13:17.400 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:17.400 "is_configured": false, 00:13:17.400 "data_offset": 0, 00:13:17.400 "data_size": 65536 00:13:17.400 }, 00:13:17.400 { 00:13:17.400 "name": "BaseBdev2", 00:13:17.400 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:17.400 "is_configured": true, 00:13:17.400 "data_offset": 0, 00:13:17.400 "data_size": 65536 00:13:17.400 }, 00:13:17.400 { 00:13:17.400 "name": "BaseBdev3", 00:13:17.400 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:17.400 "is_configured": true, 00:13:17.400 "data_offset": 0, 00:13:17.400 "data_size": 65536 00:13:17.400 }, 00:13:17.400 { 00:13:17.400 "name": "BaseBdev4", 00:13:17.400 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:17.400 "is_configured": true, 00:13:17.400 "data_offset": 0, 00:13:17.400 "data_size": 65536 00:13:17.400 } 00:13:17.400 ] 00:13:17.400 }' 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.400 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.660 11:29:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.660 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.660 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.660 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.660 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 78e4c8da-093e-4123-b6c6-dde067933ad6 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 [2024-11-05 11:29:17.040015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:17.920 [2024-11-05 11:29:17.040058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:17.920 [2024-11-05 11:29:17.040067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:17.920 
[2024-11-05 11:29:17.040382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:17.920 [2024-11-05 11:29:17.040575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:17.920 [2024-11-05 11:29:17.040586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:17.920 [2024-11-05 11:29:17.040823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.920 NewBaseBdev 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 [ 00:13:17.920 { 00:13:17.920 "name": "NewBaseBdev", 00:13:17.920 "aliases": [ 00:13:17.920 "78e4c8da-093e-4123-b6c6-dde067933ad6" 00:13:17.920 ], 00:13:17.920 "product_name": "Malloc disk", 00:13:17.920 "block_size": 512, 00:13:17.920 "num_blocks": 65536, 00:13:17.920 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:17.920 "assigned_rate_limits": { 00:13:17.920 "rw_ios_per_sec": 0, 00:13:17.920 "rw_mbytes_per_sec": 0, 00:13:17.920 "r_mbytes_per_sec": 0, 00:13:17.920 "w_mbytes_per_sec": 0 00:13:17.920 }, 00:13:17.920 "claimed": true, 00:13:17.920 "claim_type": "exclusive_write", 00:13:17.920 "zoned": false, 00:13:17.920 "supported_io_types": { 00:13:17.920 "read": true, 00:13:17.920 "write": true, 00:13:17.920 "unmap": true, 00:13:17.920 "flush": true, 00:13:17.920 "reset": true, 00:13:17.920 "nvme_admin": false, 00:13:17.920 "nvme_io": false, 00:13:17.920 "nvme_io_md": false, 00:13:17.920 "write_zeroes": true, 00:13:17.920 "zcopy": true, 00:13:17.920 "get_zone_info": false, 00:13:17.920 "zone_management": false, 00:13:17.920 "zone_append": false, 00:13:17.920 "compare": false, 00:13:17.920 "compare_and_write": false, 00:13:17.920 "abort": true, 00:13:17.920 "seek_hole": false, 00:13:17.920 "seek_data": false, 00:13:17.920 "copy": true, 00:13:17.920 "nvme_iov_md": false 00:13:17.920 }, 00:13:17.920 "memory_domains": [ 00:13:17.920 { 00:13:17.920 "dma_device_id": "system", 00:13:17.920 "dma_device_type": 1 00:13:17.920 }, 00:13:17.920 { 00:13:17.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.920 "dma_device_type": 2 00:13:17.920 } 00:13:17.920 ], 00:13:17.920 "driver_specific": {} 00:13:17.920 } 00:13:17.920 ] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.921 "name": "Existed_Raid", 00:13:17.921 "uuid": "e8d24b69-e060-489a-a748-76e86f163d93", 00:13:17.921 "strip_size_kb": 0, 00:13:17.921 "state": "online", 00:13:17.921 
"raid_level": "raid1", 00:13:17.921 "superblock": false, 00:13:17.921 "num_base_bdevs": 4, 00:13:17.921 "num_base_bdevs_discovered": 4, 00:13:17.921 "num_base_bdevs_operational": 4, 00:13:17.921 "base_bdevs_list": [ 00:13:17.921 { 00:13:17.921 "name": "NewBaseBdev", 00:13:17.921 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:17.921 "is_configured": true, 00:13:17.921 "data_offset": 0, 00:13:17.921 "data_size": 65536 00:13:17.921 }, 00:13:17.921 { 00:13:17.921 "name": "BaseBdev2", 00:13:17.921 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:17.921 "is_configured": true, 00:13:17.921 "data_offset": 0, 00:13:17.921 "data_size": 65536 00:13:17.921 }, 00:13:17.921 { 00:13:17.921 "name": "BaseBdev3", 00:13:17.921 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:17.921 "is_configured": true, 00:13:17.921 "data_offset": 0, 00:13:17.921 "data_size": 65536 00:13:17.921 }, 00:13:17.921 { 00:13:17.921 "name": "BaseBdev4", 00:13:17.921 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:17.921 "is_configured": true, 00:13:17.921 "data_offset": 0, 00:13:17.921 "data_size": 65536 00:13:17.921 } 00:13:17.921 ] 00:13:17.921 }' 00:13:17.921 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.921 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.490 [2024-11-05 11:29:17.523662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.490 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.490 "name": "Existed_Raid", 00:13:18.490 "aliases": [ 00:13:18.490 "e8d24b69-e060-489a-a748-76e86f163d93" 00:13:18.490 ], 00:13:18.490 "product_name": "Raid Volume", 00:13:18.490 "block_size": 512, 00:13:18.490 "num_blocks": 65536, 00:13:18.490 "uuid": "e8d24b69-e060-489a-a748-76e86f163d93", 00:13:18.490 "assigned_rate_limits": { 00:13:18.490 "rw_ios_per_sec": 0, 00:13:18.490 "rw_mbytes_per_sec": 0, 00:13:18.490 "r_mbytes_per_sec": 0, 00:13:18.490 "w_mbytes_per_sec": 0 00:13:18.490 }, 00:13:18.490 "claimed": false, 00:13:18.490 "zoned": false, 00:13:18.490 "supported_io_types": { 00:13:18.490 "read": true, 00:13:18.490 "write": true, 00:13:18.490 "unmap": false, 00:13:18.490 "flush": false, 00:13:18.490 "reset": true, 00:13:18.490 "nvme_admin": false, 00:13:18.490 "nvme_io": false, 00:13:18.490 "nvme_io_md": false, 00:13:18.490 "write_zeroes": true, 00:13:18.490 "zcopy": false, 00:13:18.490 "get_zone_info": false, 00:13:18.490 "zone_management": false, 00:13:18.490 "zone_append": false, 00:13:18.490 "compare": false, 00:13:18.490 "compare_and_write": false, 00:13:18.490 "abort": false, 00:13:18.490 "seek_hole": false, 00:13:18.490 "seek_data": false, 00:13:18.490 
"copy": false, 00:13:18.490 "nvme_iov_md": false 00:13:18.490 }, 00:13:18.490 "memory_domains": [ 00:13:18.490 { 00:13:18.490 "dma_device_id": "system", 00:13:18.490 "dma_device_type": 1 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.490 "dma_device_type": 2 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "system", 00:13:18.490 "dma_device_type": 1 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.490 "dma_device_type": 2 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "system", 00:13:18.490 "dma_device_type": 1 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.490 "dma_device_type": 2 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "system", 00:13:18.490 "dma_device_type": 1 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.490 "dma_device_type": 2 00:13:18.490 } 00:13:18.490 ], 00:13:18.490 "driver_specific": { 00:13:18.490 "raid": { 00:13:18.490 "uuid": "e8d24b69-e060-489a-a748-76e86f163d93", 00:13:18.490 "strip_size_kb": 0, 00:13:18.490 "state": "online", 00:13:18.490 "raid_level": "raid1", 00:13:18.490 "superblock": false, 00:13:18.490 "num_base_bdevs": 4, 00:13:18.490 "num_base_bdevs_discovered": 4, 00:13:18.490 "num_base_bdevs_operational": 4, 00:13:18.490 "base_bdevs_list": [ 00:13:18.490 { 00:13:18.490 "name": "NewBaseBdev", 00:13:18.490 "uuid": "78e4c8da-093e-4123-b6c6-dde067933ad6", 00:13:18.490 "is_configured": true, 00:13:18.490 "data_offset": 0, 00:13:18.490 "data_size": 65536 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "name": "BaseBdev2", 00:13:18.490 "uuid": "606930b8-94c8-4d31-bcd9-d2655dd5a8e0", 00:13:18.490 "is_configured": true, 00:13:18.490 "data_offset": 0, 00:13:18.490 "data_size": 65536 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "name": "BaseBdev3", 00:13:18.490 "uuid": "fe2d2b30-77fa-4d50-9374-0221b084ff33", 00:13:18.490 
"is_configured": true, 00:13:18.490 "data_offset": 0, 00:13:18.490 "data_size": 65536 00:13:18.490 }, 00:13:18.490 { 00:13:18.490 "name": "BaseBdev4", 00:13:18.490 "uuid": "3e03f4e0-0d75-4029-a6aa-432c4c16ac81", 00:13:18.490 "is_configured": true, 00:13:18.490 "data_offset": 0, 00:13:18.490 "data_size": 65536 00:13:18.490 } 00:13:18.490 ] 00:13:18.490 } 00:13:18.490 } 00:13:18.490 }' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:18.491 BaseBdev2 00:13:18.491 BaseBdev3 00:13:18.491 BaseBdev4' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.491 11:29:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.491 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.751 11:29:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.751 [2024-11-05 11:29:17.862664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.751 [2024-11-05 11:29:17.862694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.751 [2024-11-05 11:29:17.862783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.751 [2024-11-05 11:29:17.863084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.751 [2024-11-05 11:29:17.863100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73289 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73289 ']' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73289 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73289 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73289' 00:13:18.751 killing process with pid 73289 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73289 00:13:18.751 [2024-11-05 11:29:17.912678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.751 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73289 00:13:19.320 [2024-11-05 11:29:18.303679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.257 ************************************ 00:13:20.257 END TEST raid_state_function_test 00:13:20.257 ************************************ 00:13:20.257 00:13:20.257 real 0m11.411s 00:13:20.257 user 0m18.094s 00:13:20.257 sys 0m2.084s 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
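The per-base-bdev check that repeats throughout the transcript above (`bdev_raid.sh@191-193`) follows one pattern: query each base bdev over RPC, join four properties into a single string with `jq`, and compare against the expected layout. The sketch below reproduces that loop in a self-contained form; `rpc_cmd` is stubbed with canned JSON here (in the real suite it drives the SPDK app over `/var/tmp/spdk.sock` via `scripts/rpc.py`), and the bdev names and expected values are assumptions taken from the log, not from SPDK sources.

```shell
#!/usr/bin/env bash
# Hedged sketch of the base-bdev property check seen in the transcript.
# Assumption: rpc_cmd is stubbed; the real harness calls the SPDK RPC server.
set -euo pipefail

# Stub standing in for: scripts/rpc.py bdev_get_bdevs -b <name>
rpc_cmd() {
    cat <<'JSON'
[{"name": "BaseBdevX", "block_size": 512, "md_size": 0, "md_interleave": false, "dif_type": 0}]
JSON
}

base_bdev_names="BaseBdev1 BaseBdev2"
expected="512 0 false 0"   # block_size md_size md_interleave dif_type

for name in $base_bdev_names; do
    # Collapse the four fields into one space-joined string, as the test does
    cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name" |
        jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    [[ $cmp_base_bdev == "$expected" ]] || { echo "mismatch for $name"; exit 1; }
done
echo "all base bdevs match"
```

In the actual log the right-hand side of the compare appears backslash-escaped (`\5\1\2\ \ \ `) because xtrace quotes each character of a `[[ … ]]` pattern word when echoing it; the comparison being performed is the same literal string match as above.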
00:13:20.257 11:29:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:20.257 11:29:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:20.257 11:29:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.257 11:29:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.257 ************************************ 00:13:20.257 START TEST raid_state_function_test_sb 00:13:20.257 ************************************ 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.257 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.258 
11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73960 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73960' 00:13:20.258 Process raid pid: 73960 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73960 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73960 ']' 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.258 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.517 [2024-11-05 11:29:19.560275] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:20.517 [2024-11-05 11:29:19.560480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.517 [2024-11-05 11:29:19.731480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.776 [2024-11-05 11:29:19.847643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.035 [2024-11-05 11:29:20.054109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.035 [2024-11-05 11:29:20.054160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 [2024-11-05 11:29:20.404833] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.294 [2024-11-05 11:29:20.404969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.294 [2024-11-05 11:29:20.404984] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.294 [2024-11-05 11:29:20.404994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.294 [2024-11-05 11:29:20.405001] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:21.294 [2024-11-05 11:29:20.405010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.294 [2024-11-05 11:29:20.405021] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.294 [2024-11-05 11:29:20.405030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.294 11:29:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.294 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.294 "name": "Existed_Raid", 00:13:21.294 "uuid": "a8f34baf-097e-4c6c-aede-aff1a6a5691a", 00:13:21.294 "strip_size_kb": 0, 00:13:21.294 "state": "configuring", 00:13:21.294 "raid_level": "raid1", 00:13:21.294 "superblock": true, 00:13:21.294 "num_base_bdevs": 4, 00:13:21.294 "num_base_bdevs_discovered": 0, 00:13:21.294 "num_base_bdevs_operational": 4, 00:13:21.294 "base_bdevs_list": [ 00:13:21.294 { 00:13:21.294 "name": "BaseBdev1", 00:13:21.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.294 "is_configured": false, 00:13:21.294 "data_offset": 0, 00:13:21.294 "data_size": 0 00:13:21.294 }, 00:13:21.294 { 00:13:21.294 "name": "BaseBdev2", 00:13:21.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.294 "is_configured": false, 00:13:21.294 "data_offset": 0, 00:13:21.295 "data_size": 0 00:13:21.295 }, 00:13:21.295 { 00:13:21.295 "name": "BaseBdev3", 00:13:21.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.295 "is_configured": false, 00:13:21.295 "data_offset": 0, 00:13:21.295 "data_size": 0 00:13:21.295 }, 00:13:21.295 { 00:13:21.295 "name": "BaseBdev4", 00:13:21.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.295 "is_configured": false, 00:13:21.295 "data_offset": 0, 00:13:21.295 "data_size": 0 00:13:21.295 } 00:13:21.295 ] 00:13:21.295 }' 00:13:21.295 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.295 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 11:29:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-11-05 11:29:20.919930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.863 [2024-11-05 11:29:20.920052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-11-05 11:29:20.931904] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.863 [2024-11-05 11:29:20.931989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.863 [2024-11-05 11:29:20.932017] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.863 [2024-11-05 11:29:20.932041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.863 [2024-11-05 11:29:20.932059] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.863 [2024-11-05 11:29:20.932081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.863 [2024-11-05 11:29:20.932099] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:13:21.863 [2024-11-05 11:29:20.932121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-11-05 11:29:20.978228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.863 BaseBdev1 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 11:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [ 00:13:21.863 { 00:13:21.863 "name": "BaseBdev1", 00:13:21.863 "aliases": [ 00:13:21.863 "ce9a71aa-7ab1-4f5e-ae23-618522a0f804" 00:13:21.863 ], 00:13:21.863 "product_name": "Malloc disk", 00:13:21.863 "block_size": 512, 00:13:21.863 "num_blocks": 65536, 00:13:21.863 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:21.863 "assigned_rate_limits": { 00:13:21.863 "rw_ios_per_sec": 0, 00:13:21.863 "rw_mbytes_per_sec": 0, 00:13:21.863 "r_mbytes_per_sec": 0, 00:13:21.863 "w_mbytes_per_sec": 0 00:13:21.863 }, 00:13:21.863 "claimed": true, 00:13:21.863 "claim_type": "exclusive_write", 00:13:21.863 "zoned": false, 00:13:21.863 "supported_io_types": { 00:13:21.863 "read": true, 00:13:21.863 "write": true, 00:13:21.863 "unmap": true, 00:13:21.863 "flush": true, 00:13:21.863 "reset": true, 00:13:21.863 "nvme_admin": false, 00:13:21.863 "nvme_io": false, 00:13:21.863 "nvme_io_md": false, 00:13:21.864 "write_zeroes": true, 00:13:21.864 "zcopy": true, 00:13:21.864 "get_zone_info": false, 00:13:21.864 "zone_management": false, 00:13:21.864 "zone_append": false, 00:13:21.864 "compare": false, 00:13:21.864 "compare_and_write": false, 00:13:21.864 "abort": true, 00:13:21.864 "seek_hole": false, 00:13:21.864 "seek_data": false, 00:13:21.864 "copy": true, 00:13:21.864 "nvme_iov_md": false 00:13:21.864 }, 00:13:21.864 "memory_domains": [ 00:13:21.864 { 00:13:21.864 "dma_device_id": "system", 00:13:21.864 "dma_device_type": 1 00:13:21.864 }, 00:13:21.864 { 00:13:21.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.864 "dma_device_type": 2 00:13:21.864 } 00:13:21.864 
], 00:13:21.864 "driver_specific": {} 00:13:21.864 } 00:13:21.864 ] 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.864 11:29:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.864 "name": "Existed_Raid", 00:13:21.864 "uuid": "d7870a30-95fb-4440-bff0-ee581bca8678", 00:13:21.864 "strip_size_kb": 0, 00:13:21.864 "state": "configuring", 00:13:21.864 "raid_level": "raid1", 00:13:21.864 "superblock": true, 00:13:21.864 "num_base_bdevs": 4, 00:13:21.864 "num_base_bdevs_discovered": 1, 00:13:21.864 "num_base_bdevs_operational": 4, 00:13:21.864 "base_bdevs_list": [ 00:13:21.864 { 00:13:21.864 "name": "BaseBdev1", 00:13:21.864 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:21.864 "is_configured": true, 00:13:21.864 "data_offset": 2048, 00:13:21.864 "data_size": 63488 00:13:21.864 }, 00:13:21.864 { 00:13:21.864 "name": "BaseBdev2", 00:13:21.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.864 "is_configured": false, 00:13:21.864 "data_offset": 0, 00:13:21.864 "data_size": 0 00:13:21.864 }, 00:13:21.864 { 00:13:21.864 "name": "BaseBdev3", 00:13:21.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.864 "is_configured": false, 00:13:21.864 "data_offset": 0, 00:13:21.864 "data_size": 0 00:13:21.864 }, 00:13:21.864 { 00:13:21.864 "name": "BaseBdev4", 00:13:21.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.864 "is_configured": false, 00:13:21.864 "data_offset": 0, 00:13:21.864 "data_size": 0 00:13:21.864 } 00:13:21.864 ] 00:13:21.864 }' 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.864 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.432 11:29:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.432 [2024-11-05 11:29:21.437470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.432 [2024-11-05 11:29:21.437529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.432 [2024-11-05 11:29:21.449495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.432 [2024-11-05 11:29:21.451424] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.432 [2024-11-05 11:29:21.451470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.432 [2024-11-05 11:29:21.451481] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.432 [2024-11-05 11:29:21.451493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.432 [2024-11-05 11:29:21.451501] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.432 [2024-11-05 11:29:21.451510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.432 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:22.432 "name": "Existed_Raid", 00:13:22.432 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:22.432 "strip_size_kb": 0, 00:13:22.432 "state": "configuring", 00:13:22.432 "raid_level": "raid1", 00:13:22.432 "superblock": true, 00:13:22.432 "num_base_bdevs": 4, 00:13:22.432 "num_base_bdevs_discovered": 1, 00:13:22.432 "num_base_bdevs_operational": 4, 00:13:22.432 "base_bdevs_list": [ 00:13:22.432 { 00:13:22.432 "name": "BaseBdev1", 00:13:22.432 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:22.432 "is_configured": true, 00:13:22.432 "data_offset": 2048, 00:13:22.432 "data_size": 63488 00:13:22.432 }, 00:13:22.432 { 00:13:22.432 "name": "BaseBdev2", 00:13:22.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.432 "is_configured": false, 00:13:22.432 "data_offset": 0, 00:13:22.432 "data_size": 0 00:13:22.432 }, 00:13:22.432 { 00:13:22.432 "name": "BaseBdev3", 00:13:22.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.432 "is_configured": false, 00:13:22.432 "data_offset": 0, 00:13:22.432 "data_size": 0 00:13:22.432 }, 00:13:22.432 { 00:13:22.432 "name": "BaseBdev4", 00:13:22.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.432 "is_configured": false, 00:13:22.432 "data_offset": 0, 00:13:22.432 "data_size": 0 00:13:22.432 } 00:13:22.432 ] 00:13:22.432 }' 00:13:22.433 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.433 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.691 [2024-11-05 11:29:21.917399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:13:22.691 BaseBdev2 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.691 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.691 [ 00:13:22.691 { 00:13:22.691 "name": "BaseBdev2", 00:13:22.691 "aliases": [ 00:13:22.691 "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960" 00:13:22.691 ], 00:13:22.691 "product_name": "Malloc disk", 00:13:22.691 "block_size": 512, 00:13:22.691 "num_blocks": 65536, 00:13:22.691 "uuid": "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:22.691 
"assigned_rate_limits": { 00:13:22.691 "rw_ios_per_sec": 0, 00:13:22.691 "rw_mbytes_per_sec": 0, 00:13:22.691 "r_mbytes_per_sec": 0, 00:13:22.691 "w_mbytes_per_sec": 0 00:13:22.691 }, 00:13:22.691 "claimed": true, 00:13:22.691 "claim_type": "exclusive_write", 00:13:22.691 "zoned": false, 00:13:22.691 "supported_io_types": { 00:13:22.691 "read": true, 00:13:22.691 "write": true, 00:13:22.691 "unmap": true, 00:13:22.691 "flush": true, 00:13:22.691 "reset": true, 00:13:22.691 "nvme_admin": false, 00:13:22.691 "nvme_io": false, 00:13:22.691 "nvme_io_md": false, 00:13:22.691 "write_zeroes": true, 00:13:22.691 "zcopy": true, 00:13:22.691 "get_zone_info": false, 00:13:22.691 "zone_management": false, 00:13:22.691 "zone_append": false, 00:13:22.691 "compare": false, 00:13:22.691 "compare_and_write": false, 00:13:22.691 "abort": true, 00:13:22.692 "seek_hole": false, 00:13:22.692 "seek_data": false, 00:13:22.692 "copy": true, 00:13:22.692 "nvme_iov_md": false 00:13:22.692 }, 00:13:22.692 "memory_domains": [ 00:13:22.692 { 00:13:22.692 "dma_device_id": "system", 00:13:22.692 "dma_device_type": 1 00:13:22.692 }, 00:13:22.692 { 00:13:22.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.692 "dma_device_type": 2 00:13:22.692 } 00:13:22.692 ], 00:13:22.692 "driver_specific": {} 00:13:22.692 } 00:13:22.692 ] 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.692 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.951 11:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.951 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.951 "name": "Existed_Raid", 00:13:22.951 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:22.951 "strip_size_kb": 0, 00:13:22.951 "state": "configuring", 00:13:22.951 "raid_level": "raid1", 00:13:22.951 "superblock": true, 00:13:22.951 "num_base_bdevs": 4, 00:13:22.951 "num_base_bdevs_discovered": 2, 00:13:22.951 "num_base_bdevs_operational": 4, 
00:13:22.951 "base_bdevs_list": [ 00:13:22.951 { 00:13:22.951 "name": "BaseBdev1", 00:13:22.951 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:22.951 "is_configured": true, 00:13:22.951 "data_offset": 2048, 00:13:22.951 "data_size": 63488 00:13:22.951 }, 00:13:22.951 { 00:13:22.951 "name": "BaseBdev2", 00:13:22.951 "uuid": "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:22.951 "is_configured": true, 00:13:22.951 "data_offset": 2048, 00:13:22.951 "data_size": 63488 00:13:22.951 }, 00:13:22.951 { 00:13:22.951 "name": "BaseBdev3", 00:13:22.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.951 "is_configured": false, 00:13:22.951 "data_offset": 0, 00:13:22.951 "data_size": 0 00:13:22.951 }, 00:13:22.951 { 00:13:22.951 "name": "BaseBdev4", 00:13:22.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.951 "is_configured": false, 00:13:22.951 "data_offset": 0, 00:13:22.951 "data_size": 0 00:13:22.951 } 00:13:22.951 ] 00:13:22.951 }' 00:13:22.951 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.951 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.210 [2024-11-05 11:29:22.446587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.210 BaseBdev3 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:23.210 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.211 [ 00:13:23.211 { 00:13:23.211 "name": "BaseBdev3", 00:13:23.211 "aliases": [ 00:13:23.211 "6612e193-89e0-402a-8e96-ad8dae34d594" 00:13:23.211 ], 00:13:23.211 "product_name": "Malloc disk", 00:13:23.211 "block_size": 512, 00:13:23.211 "num_blocks": 65536, 00:13:23.211 "uuid": "6612e193-89e0-402a-8e96-ad8dae34d594", 00:13:23.211 "assigned_rate_limits": { 00:13:23.211 "rw_ios_per_sec": 0, 00:13:23.211 "rw_mbytes_per_sec": 0, 00:13:23.211 "r_mbytes_per_sec": 0, 00:13:23.211 "w_mbytes_per_sec": 0 00:13:23.211 }, 00:13:23.211 "claimed": true, 00:13:23.211 "claim_type": "exclusive_write", 00:13:23.211 "zoned": false, 00:13:23.211 "supported_io_types": { 00:13:23.211 "read": true, 00:13:23.211 
"write": true, 00:13:23.211 "unmap": true, 00:13:23.211 "flush": true, 00:13:23.211 "reset": true, 00:13:23.211 "nvme_admin": false, 00:13:23.211 "nvme_io": false, 00:13:23.211 "nvme_io_md": false, 00:13:23.211 "write_zeroes": true, 00:13:23.211 "zcopy": true, 00:13:23.211 "get_zone_info": false, 00:13:23.211 "zone_management": false, 00:13:23.211 "zone_append": false, 00:13:23.211 "compare": false, 00:13:23.211 "compare_and_write": false, 00:13:23.211 "abort": true, 00:13:23.211 "seek_hole": false, 00:13:23.211 "seek_data": false, 00:13:23.211 "copy": true, 00:13:23.211 "nvme_iov_md": false 00:13:23.211 }, 00:13:23.211 "memory_domains": [ 00:13:23.211 { 00:13:23.211 "dma_device_id": "system", 00:13:23.211 "dma_device_type": 1 00:13:23.211 }, 00:13:23.211 { 00:13:23.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.211 "dma_device_type": 2 00:13:23.211 } 00:13:23.211 ], 00:13:23.211 "driver_specific": {} 00:13:23.211 } 00:13:23.211 ] 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:23.211 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.470 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.471 "name": "Existed_Raid", 00:13:23.471 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:23.471 "strip_size_kb": 0, 00:13:23.471 "state": "configuring", 00:13:23.471 "raid_level": "raid1", 00:13:23.471 "superblock": true, 00:13:23.471 "num_base_bdevs": 4, 00:13:23.471 "num_base_bdevs_discovered": 3, 00:13:23.471 "num_base_bdevs_operational": 4, 00:13:23.471 "base_bdevs_list": [ 00:13:23.471 { 00:13:23.471 "name": "BaseBdev1", 00:13:23.471 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:23.471 "is_configured": true, 00:13:23.471 "data_offset": 2048, 00:13:23.471 "data_size": 63488 00:13:23.471 }, 00:13:23.471 { 00:13:23.471 "name": "BaseBdev2", 00:13:23.471 "uuid": 
"ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:23.471 "is_configured": true, 00:13:23.471 "data_offset": 2048, 00:13:23.471 "data_size": 63488 00:13:23.471 }, 00:13:23.471 { 00:13:23.471 "name": "BaseBdev3", 00:13:23.471 "uuid": "6612e193-89e0-402a-8e96-ad8dae34d594", 00:13:23.471 "is_configured": true, 00:13:23.471 "data_offset": 2048, 00:13:23.471 "data_size": 63488 00:13:23.471 }, 00:13:23.471 { 00:13:23.471 "name": "BaseBdev4", 00:13:23.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.471 "is_configured": false, 00:13:23.471 "data_offset": 0, 00:13:23.471 "data_size": 0 00:13:23.471 } 00:13:23.471 ] 00:13:23.471 }' 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.471 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.730 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:23.730 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.730 11:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.990 [2024-11-05 11:29:23.016152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.990 [2024-11-05 11:29:23.016603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.990 [2024-11-05 11:29:23.016658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.990 [2024-11-05 11:29:23.016966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:23.990 [2024-11-05 11:29:23.017221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.990 [2024-11-05 11:29:23.017280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:13:23.990 BaseBdev4 00:13:23.990 [2024-11-05 11:29:23.017511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.990 [ 00:13:23.990 { 00:13:23.990 "name": "BaseBdev4", 00:13:23.990 "aliases": [ 00:13:23.990 "6e3397f3-ab3b-40b8-a1aa-a74d9b4195f5" 00:13:23.990 ], 00:13:23.990 "product_name": "Malloc disk", 00:13:23.990 "block_size": 512, 00:13:23.990 
"num_blocks": 65536, 00:13:23.990 "uuid": "6e3397f3-ab3b-40b8-a1aa-a74d9b4195f5", 00:13:23.990 "assigned_rate_limits": { 00:13:23.990 "rw_ios_per_sec": 0, 00:13:23.990 "rw_mbytes_per_sec": 0, 00:13:23.990 "r_mbytes_per_sec": 0, 00:13:23.990 "w_mbytes_per_sec": 0 00:13:23.990 }, 00:13:23.990 "claimed": true, 00:13:23.990 "claim_type": "exclusive_write", 00:13:23.990 "zoned": false, 00:13:23.990 "supported_io_types": { 00:13:23.990 "read": true, 00:13:23.990 "write": true, 00:13:23.990 "unmap": true, 00:13:23.990 "flush": true, 00:13:23.990 "reset": true, 00:13:23.990 "nvme_admin": false, 00:13:23.990 "nvme_io": false, 00:13:23.990 "nvme_io_md": false, 00:13:23.990 "write_zeroes": true, 00:13:23.990 "zcopy": true, 00:13:23.990 "get_zone_info": false, 00:13:23.990 "zone_management": false, 00:13:23.990 "zone_append": false, 00:13:23.990 "compare": false, 00:13:23.990 "compare_and_write": false, 00:13:23.990 "abort": true, 00:13:23.990 "seek_hole": false, 00:13:23.990 "seek_data": false, 00:13:23.990 "copy": true, 00:13:23.990 "nvme_iov_md": false 00:13:23.990 }, 00:13:23.990 "memory_domains": [ 00:13:23.990 { 00:13:23.990 "dma_device_id": "system", 00:13:23.990 "dma_device_type": 1 00:13:23.990 }, 00:13:23.990 { 00:13:23.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.990 "dma_device_type": 2 00:13:23.990 } 00:13:23.990 ], 00:13:23.990 "driver_specific": {} 00:13:23.990 } 00:13:23.990 ] 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.990 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.991 "name": "Existed_Raid", 00:13:23.991 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:23.991 "strip_size_kb": 0, 00:13:23.991 "state": "online", 00:13:23.991 "raid_level": "raid1", 00:13:23.991 "superblock": true, 00:13:23.991 "num_base_bdevs": 4, 
00:13:23.991 "num_base_bdevs_discovered": 4, 00:13:23.991 "num_base_bdevs_operational": 4, 00:13:23.991 "base_bdevs_list": [ 00:13:23.991 { 00:13:23.991 "name": "BaseBdev1", 00:13:23.991 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:23.991 "is_configured": true, 00:13:23.991 "data_offset": 2048, 00:13:23.991 "data_size": 63488 00:13:23.991 }, 00:13:23.991 { 00:13:23.991 "name": "BaseBdev2", 00:13:23.991 "uuid": "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:23.991 "is_configured": true, 00:13:23.991 "data_offset": 2048, 00:13:23.991 "data_size": 63488 00:13:23.991 }, 00:13:23.991 { 00:13:23.991 "name": "BaseBdev3", 00:13:23.991 "uuid": "6612e193-89e0-402a-8e96-ad8dae34d594", 00:13:23.991 "is_configured": true, 00:13:23.991 "data_offset": 2048, 00:13:23.991 "data_size": 63488 00:13:23.991 }, 00:13:23.991 { 00:13:23.991 "name": "BaseBdev4", 00:13:23.991 "uuid": "6e3397f3-ab3b-40b8-a1aa-a74d9b4195f5", 00:13:23.991 "is_configured": true, 00:13:23.991 "data_offset": 2048, 00:13:23.991 "data_size": 63488 00:13:23.991 } 00:13:23.991 ] 00:13:23.991 }' 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.991 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.251 
11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.251 [2024-11-05 11:29:23.467825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.251 "name": "Existed_Raid", 00:13:24.251 "aliases": [ 00:13:24.251 "40e776fd-4b2d-44aa-a63e-e57dd73baecc" 00:13:24.251 ], 00:13:24.251 "product_name": "Raid Volume", 00:13:24.251 "block_size": 512, 00:13:24.251 "num_blocks": 63488, 00:13:24.251 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:24.251 "assigned_rate_limits": { 00:13:24.251 "rw_ios_per_sec": 0, 00:13:24.251 "rw_mbytes_per_sec": 0, 00:13:24.251 "r_mbytes_per_sec": 0, 00:13:24.251 "w_mbytes_per_sec": 0 00:13:24.251 }, 00:13:24.251 "claimed": false, 00:13:24.251 "zoned": false, 00:13:24.251 "supported_io_types": { 00:13:24.251 "read": true, 00:13:24.251 "write": true, 00:13:24.251 "unmap": false, 00:13:24.251 "flush": false, 00:13:24.251 "reset": true, 00:13:24.251 "nvme_admin": false, 00:13:24.251 "nvme_io": false, 00:13:24.251 "nvme_io_md": false, 00:13:24.251 "write_zeroes": true, 00:13:24.251 "zcopy": false, 00:13:24.251 "get_zone_info": false, 00:13:24.251 "zone_management": false, 00:13:24.251 "zone_append": false, 00:13:24.251 "compare": false, 00:13:24.251 "compare_and_write": false, 00:13:24.251 "abort": false, 00:13:24.251 "seek_hole": false, 00:13:24.251 "seek_data": false, 00:13:24.251 "copy": false, 00:13:24.251 
"nvme_iov_md": false 00:13:24.251 }, 00:13:24.251 "memory_domains": [ 00:13:24.251 { 00:13:24.251 "dma_device_id": "system", 00:13:24.251 "dma_device_type": 1 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.251 "dma_device_type": 2 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "system", 00:13:24.251 "dma_device_type": 1 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.251 "dma_device_type": 2 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "system", 00:13:24.251 "dma_device_type": 1 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.251 "dma_device_type": 2 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "system", 00:13:24.251 "dma_device_type": 1 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.251 "dma_device_type": 2 00:13:24.251 } 00:13:24.251 ], 00:13:24.251 "driver_specific": { 00:13:24.251 "raid": { 00:13:24.251 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:24.251 "strip_size_kb": 0, 00:13:24.251 "state": "online", 00:13:24.251 "raid_level": "raid1", 00:13:24.251 "superblock": true, 00:13:24.251 "num_base_bdevs": 4, 00:13:24.251 "num_base_bdevs_discovered": 4, 00:13:24.251 "num_base_bdevs_operational": 4, 00:13:24.251 "base_bdevs_list": [ 00:13:24.251 { 00:13:24.251 "name": "BaseBdev1", 00:13:24.251 "uuid": "ce9a71aa-7ab1-4f5e-ae23-618522a0f804", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "name": "BaseBdev2", 00:13:24.251 "uuid": "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "name": "BaseBdev3", 00:13:24.251 "uuid": "6612e193-89e0-402a-8e96-ad8dae34d594", 00:13:24.251 "is_configured": true, 
00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 }, 00:13:24.251 { 00:13:24.251 "name": "BaseBdev4", 00:13:24.251 "uuid": "6e3397f3-ab3b-40b8-a1aa-a74d9b4195f5", 00:13:24.251 "is_configured": true, 00:13:24.251 "data_offset": 2048, 00:13:24.251 "data_size": 63488 00:13:24.251 } 00:13:24.251 ] 00:13:24.251 } 00:13:24.251 } 00:13:24.251 }' 00:13:24.251 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.511 BaseBdev2 00:13:24.511 BaseBdev3 00:13:24.511 BaseBdev4' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.511 11:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.511 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 [2024-11-05 11:29:23.759094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:24.771 11:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.771 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.771 "name": "Existed_Raid", 00:13:24.771 "uuid": "40e776fd-4b2d-44aa-a63e-e57dd73baecc", 00:13:24.772 "strip_size_kb": 0, 00:13:24.772 
"state": "online", 00:13:24.772 "raid_level": "raid1", 00:13:24.772 "superblock": true, 00:13:24.772 "num_base_bdevs": 4, 00:13:24.772 "num_base_bdevs_discovered": 3, 00:13:24.772 "num_base_bdevs_operational": 3, 00:13:24.772 "base_bdevs_list": [ 00:13:24.772 { 00:13:24.772 "name": null, 00:13:24.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.772 "is_configured": false, 00:13:24.772 "data_offset": 0, 00:13:24.772 "data_size": 63488 00:13:24.772 }, 00:13:24.772 { 00:13:24.772 "name": "BaseBdev2", 00:13:24.772 "uuid": "ca42f05e-35e2-4a67-b49a-c8ec2bf8d960", 00:13:24.772 "is_configured": true, 00:13:24.772 "data_offset": 2048, 00:13:24.772 "data_size": 63488 00:13:24.772 }, 00:13:24.772 { 00:13:24.772 "name": "BaseBdev3", 00:13:24.772 "uuid": "6612e193-89e0-402a-8e96-ad8dae34d594", 00:13:24.772 "is_configured": true, 00:13:24.772 "data_offset": 2048, 00:13:24.772 "data_size": 63488 00:13:24.772 }, 00:13:24.772 { 00:13:24.772 "name": "BaseBdev4", 00:13:24.772 "uuid": "6e3397f3-ab3b-40b8-a1aa-a74d9b4195f5", 00:13:24.772 "is_configured": true, 00:13:24.772 "data_offset": 2048, 00:13:24.772 "data_size": 63488 00:13:24.772 } 00:13:24.772 ] 00:13:24.772 }' 00:13:24.772 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.772 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.374 11:29:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 [2024-11-05 11:29:24.360986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 [2024-11-05 11:29:24.516746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.374 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.651 [2024-11-05 11:29:24.663020] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.651 [2024-11-05 11:29:24.663142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.651 [2024-11-05 11:29:24.756170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.651 [2024-11-05 11:29:24.756237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.651 [2024-11-05 11:29:24.756249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.651 BaseBdev2 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:25.651 [ 00:13:25.651 { 00:13:25.651 "name": "BaseBdev2", 00:13:25.651 "aliases": [ 00:13:25.651 "286647bd-c3fb-44d7-b982-74cdd2798a7a" 00:13:25.651 ], 00:13:25.651 "product_name": "Malloc disk", 00:13:25.651 "block_size": 512, 00:13:25.651 "num_blocks": 65536, 00:13:25.651 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a", 00:13:25.651 "assigned_rate_limits": { 00:13:25.651 "rw_ios_per_sec": 0, 00:13:25.651 "rw_mbytes_per_sec": 0, 00:13:25.651 "r_mbytes_per_sec": 0, 00:13:25.651 "w_mbytes_per_sec": 0 00:13:25.651 }, 00:13:25.651 "claimed": false, 00:13:25.651 "zoned": false, 00:13:25.651 "supported_io_types": { 00:13:25.651 "read": true, 00:13:25.651 "write": true, 00:13:25.651 "unmap": true, 00:13:25.651 "flush": true, 00:13:25.651 "reset": true, 00:13:25.651 "nvme_admin": false, 00:13:25.651 "nvme_io": false, 00:13:25.651 "nvme_io_md": false, 00:13:25.651 "write_zeroes": true, 00:13:25.651 "zcopy": true, 00:13:25.651 "get_zone_info": false, 00:13:25.651 "zone_management": false, 00:13:25.651 "zone_append": false, 00:13:25.651 "compare": false, 00:13:25.651 "compare_and_write": false, 00:13:25.651 "abort": true, 00:13:25.651 "seek_hole": false, 00:13:25.651 "seek_data": false, 00:13:25.651 "copy": true, 00:13:25.651 "nvme_iov_md": false 00:13:25.651 }, 00:13:25.651 "memory_domains": [ 00:13:25.651 { 00:13:25.651 "dma_device_id": "system", 00:13:25.651 "dma_device_type": 1 00:13:25.651 }, 00:13:25.651 { 00:13:25.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.651 "dma_device_type": 2 00:13:25.651 } 00:13:25.651 ], 00:13:25.651 "driver_specific": {} 00:13:25.651 } 00:13:25.651 ] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.651 11:29:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.912 BaseBdev3 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.912 11:29:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.912 [ 00:13:25.912 { 00:13:25.912 "name": "BaseBdev3", 00:13:25.912 "aliases": [ 00:13:25.912 "609e832b-adb2-4a52-a671-bc7a02b80f62" 00:13:25.912 ], 00:13:25.912 "product_name": "Malloc disk", 00:13:25.912 "block_size": 512, 00:13:25.912 "num_blocks": 65536, 00:13:25.912 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62", 00:13:25.912 "assigned_rate_limits": { 00:13:25.912 "rw_ios_per_sec": 0, 00:13:25.912 "rw_mbytes_per_sec": 0, 00:13:25.912 "r_mbytes_per_sec": 0, 00:13:25.912 "w_mbytes_per_sec": 0 00:13:25.912 }, 00:13:25.912 "claimed": false, 00:13:25.912 "zoned": false, 00:13:25.912 "supported_io_types": { 00:13:25.912 "read": true, 00:13:25.912 "write": true, 00:13:25.912 "unmap": true, 00:13:25.912 "flush": true, 00:13:25.912 "reset": true, 00:13:25.912 "nvme_admin": false, 00:13:25.912 "nvme_io": false, 00:13:25.912 "nvme_io_md": false, 00:13:25.912 "write_zeroes": true, 00:13:25.912 "zcopy": true, 00:13:25.912 "get_zone_info": false, 00:13:25.912 "zone_management": false, 00:13:25.912 "zone_append": false, 00:13:25.912 "compare": false, 00:13:25.912 "compare_and_write": false, 00:13:25.912 "abort": true, 00:13:25.912 "seek_hole": false, 00:13:25.912 "seek_data": false, 00:13:25.912 "copy": true, 00:13:25.912 "nvme_iov_md": false 00:13:25.912 }, 00:13:25.912 "memory_domains": [ 00:13:25.912 { 00:13:25.912 "dma_device_id": "system", 00:13:25.912 "dma_device_type": 1 00:13:25.912 }, 00:13:25.912 { 00:13:25.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.912 "dma_device_type": 2 00:13:25.912 } 00:13:25.912 ], 00:13:25.912 "driver_specific": {} 00:13:25.912 } 00:13:25.912 ] 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.912 11:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.912 BaseBdev4 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:25.912 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.913 [ 00:13:25.913 { 00:13:25.913 "name": "BaseBdev4", 00:13:25.913 "aliases": [ 00:13:25.913 "c94aec96-daf6-439a-b0aa-ab813b9e610e" 00:13:25.913 ], 00:13:25.913 "product_name": "Malloc disk", 00:13:25.913 "block_size": 512, 00:13:25.913 "num_blocks": 65536, 00:13:25.913 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e", 00:13:25.913 "assigned_rate_limits": { 00:13:25.913 "rw_ios_per_sec": 0, 00:13:25.913 "rw_mbytes_per_sec": 0, 00:13:25.913 "r_mbytes_per_sec": 0, 00:13:25.913 "w_mbytes_per_sec": 0 00:13:25.913 }, 00:13:25.913 "claimed": false, 00:13:25.913 "zoned": false, 00:13:25.913 "supported_io_types": { 00:13:25.913 "read": true, 00:13:25.913 "write": true, 00:13:25.913 "unmap": true, 00:13:25.913 "flush": true, 00:13:25.913 "reset": true, 00:13:25.913 "nvme_admin": false, 00:13:25.913 "nvme_io": false, 00:13:25.913 "nvme_io_md": false, 00:13:25.913 "write_zeroes": true, 00:13:25.913 "zcopy": true, 00:13:25.913 "get_zone_info": false, 00:13:25.913 "zone_management": false, 00:13:25.913 "zone_append": false, 00:13:25.913 "compare": false, 00:13:25.913 "compare_and_write": false, 00:13:25.913 "abort": true, 00:13:25.913 "seek_hole": false, 00:13:25.913 "seek_data": false, 00:13:25.913 "copy": true, 00:13:25.913 "nvme_iov_md": false 00:13:25.913 }, 00:13:25.913 "memory_domains": [ 00:13:25.913 { 00:13:25.913 "dma_device_id": "system", 00:13:25.913 "dma_device_type": 1 00:13:25.913 }, 00:13:25.913 { 00:13:25.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.913 "dma_device_type": 2 00:13:25.913 } 00:13:25.913 ], 00:13:25.913 "driver_specific": {} 00:13:25.913 } 00:13:25.913 ] 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.913 [2024-11-05 11:29:25.063292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.913 [2024-11-05 11:29:25.063344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.913 [2024-11-05 11:29:25.063368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.913 [2024-11-05 11:29:25.065226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.913 [2024-11-05 11:29:25.065292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.913 "name": "Existed_Raid", 00:13:25.913 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652", 00:13:25.913 "strip_size_kb": 0, 00:13:25.913 "state": "configuring", 00:13:25.913 "raid_level": "raid1", 00:13:25.913 "superblock": true, 00:13:25.913 "num_base_bdevs": 4, 00:13:25.913 "num_base_bdevs_discovered": 3, 00:13:25.913 "num_base_bdevs_operational": 4, 00:13:25.913 "base_bdevs_list": [ 00:13:25.913 { 00:13:25.913 "name": "BaseBdev1", 00:13:25.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.913 "is_configured": false, 00:13:25.913 "data_offset": 0, 00:13:25.913 "data_size": 0 00:13:25.913 }, 00:13:25.913 { 00:13:25.913 "name": "BaseBdev2", 00:13:25.913 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a", 
00:13:25.913 "is_configured": true, 00:13:25.913 "data_offset": 2048, 00:13:25.913 "data_size": 63488 00:13:25.913 }, 00:13:25.913 { 00:13:25.913 "name": "BaseBdev3", 00:13:25.913 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62", 00:13:25.913 "is_configured": true, 00:13:25.913 "data_offset": 2048, 00:13:25.913 "data_size": 63488 00:13:25.913 }, 00:13:25.913 { 00:13:25.913 "name": "BaseBdev4", 00:13:25.913 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e", 00:13:25.913 "is_configured": true, 00:13:25.913 "data_offset": 2048, 00:13:25.913 "data_size": 63488 00:13:25.913 } 00:13:25.913 ] 00:13:25.913 }' 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.913 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.482 [2024-11-05 11:29:25.562409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.482 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.482 "name": "Existed_Raid", 00:13:26.482 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652", 00:13:26.482 "strip_size_kb": 0, 00:13:26.482 "state": "configuring", 00:13:26.482 "raid_level": "raid1", 00:13:26.482 "superblock": true, 00:13:26.483 "num_base_bdevs": 4, 00:13:26.483 "num_base_bdevs_discovered": 2, 00:13:26.483 "num_base_bdevs_operational": 4, 00:13:26.483 "base_bdevs_list": [ 00:13:26.483 { 00:13:26.483 "name": "BaseBdev1", 00:13:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.483 "is_configured": false, 00:13:26.483 "data_offset": 0, 00:13:26.483 "data_size": 0 00:13:26.483 }, 00:13:26.483 { 00:13:26.483 "name": null, 00:13:26.483 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a", 00:13:26.483 
"is_configured": false, 00:13:26.483 "data_offset": 0, 00:13:26.483 "data_size": 63488 00:13:26.483 }, 00:13:26.483 { 00:13:26.483 "name": "BaseBdev3", 00:13:26.483 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62", 00:13:26.483 "is_configured": true, 00:13:26.483 "data_offset": 2048, 00:13:26.483 "data_size": 63488 00:13:26.483 }, 00:13:26.483 { 00:13:26.483 "name": "BaseBdev4", 00:13:26.483 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e", 00:13:26.483 "is_configured": true, 00:13:26.483 "data_offset": 2048, 00:13:26.483 "data_size": 63488 00:13:26.483 } 00:13:26.483 ] 00:13:26.483 }' 00:13:26.483 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.483 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 [2024-11-05 11:29:26.165547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.052 BaseBdev1 
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 [ 00:13:27.052 { 00:13:27.052 "name": "BaseBdev1", 00:13:27.052 "aliases": [ 00:13:27.052 "f534d91b-be3e-4312-b02b-3eb3b7d1db8b" 00:13:27.052 ], 00:13:27.052 "product_name": "Malloc disk", 00:13:27.052 "block_size": 512, 00:13:27.052 "num_blocks": 65536, 00:13:27.052 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b", 00:13:27.052 "assigned_rate_limits": { 00:13:27.052 
"rw_ios_per_sec": 0, 00:13:27.052 "rw_mbytes_per_sec": 0, 00:13:27.052 "r_mbytes_per_sec": 0, 00:13:27.052 "w_mbytes_per_sec": 0 00:13:27.052 }, 00:13:27.052 "claimed": true, 00:13:27.052 "claim_type": "exclusive_write", 00:13:27.052 "zoned": false, 00:13:27.052 "supported_io_types": { 00:13:27.052 "read": true, 00:13:27.052 "write": true, 00:13:27.052 "unmap": true, 00:13:27.052 "flush": true, 00:13:27.052 "reset": true, 00:13:27.052 "nvme_admin": false, 00:13:27.052 "nvme_io": false, 00:13:27.052 "nvme_io_md": false, 00:13:27.052 "write_zeroes": true, 00:13:27.052 "zcopy": true, 00:13:27.052 "get_zone_info": false, 00:13:27.052 "zone_management": false, 00:13:27.052 "zone_append": false, 00:13:27.052 "compare": false, 00:13:27.052 "compare_and_write": false, 00:13:27.052 "abort": true, 00:13:27.052 "seek_hole": false, 00:13:27.052 "seek_data": false, 00:13:27.052 "copy": true, 00:13:27.052 "nvme_iov_md": false 00:13:27.052 }, 00:13:27.052 "memory_domains": [ 00:13:27.052 { 00:13:27.052 "dma_device_id": "system", 00:13:27.052 "dma_device_type": 1 00:13:27.052 }, 00:13:27.052 { 00:13:27.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.052 "dma_device_type": 2 00:13:27.052 } 00:13:27.052 ], 00:13:27.052 "driver_specific": {} 00:13:27.052 } 00:13:27.052 ] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:27.052 "name": "Existed_Raid",
00:13:27.052 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:27.052 "strip_size_kb": 0,
00:13:27.052 "state": "configuring",
00:13:27.052 "raid_level": "raid1",
00:13:27.052 "superblock": true,
00:13:27.052 "num_base_bdevs": 4,
00:13:27.052 "num_base_bdevs_discovered": 3,
00:13:27.052 "num_base_bdevs_operational": 4,
00:13:27.052 "base_bdevs_list": [
00:13:27.052 {
00:13:27.052 "name": "BaseBdev1",
00:13:27.052 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:27.052 "is_configured": true,
00:13:27.052 "data_offset": 2048,
00:13:27.052 "data_size": 63488
00:13:27.052 },
00:13:27.052 {
00:13:27.052 "name": null,
00:13:27.052 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:27.052 "is_configured": false,
00:13:27.052 "data_offset": 0,
00:13:27.052 "data_size": 63488
00:13:27.052 },
00:13:27.052 {
00:13:27.052 "name": "BaseBdev3",
00:13:27.052 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:27.052 "is_configured": true,
00:13:27.052 "data_offset": 2048,
00:13:27.052 "data_size": 63488
00:13:27.052 },
00:13:27.052 {
00:13:27.052 "name": "BaseBdev4",
00:13:27.052 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:27.052 "is_configured": true,
00:13:27.052 "data_offset": 2048,
00:13:27.052 "data_size": 63488
00:13:27.052 }
00:13:27.052 ]
00:13:27.052 }'
00:13:27.052 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:27.053 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.622
[2024-11-05 11:29:26.744644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.622 11:29:26
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:27.622 "name": "Existed_Raid",
00:13:27.622 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:27.622 "strip_size_kb": 0,
00:13:27.622 "state": "configuring",
00:13:27.622 "raid_level": "raid1",
00:13:27.622 "superblock": true,
00:13:27.622 "num_base_bdevs": 4,
00:13:27.622 "num_base_bdevs_discovered": 2,
00:13:27.622 "num_base_bdevs_operational": 4,
00:13:27.622 "base_bdevs_list": [
00:13:27.622 {
00:13:27.622 "name": "BaseBdev1",
00:13:27.622 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:27.622 "is_configured": true,
00:13:27.622 "data_offset": 2048,
00:13:27.622 "data_size": 63488
00:13:27.622 },
00:13:27.622 {
00:13:27.622 "name": null,
00:13:27.622 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:27.622 "is_configured": false,
00:13:27.622 "data_offset": 0,
00:13:27.622 "data_size": 63488
00:13:27.622 },
00:13:27.622 {
00:13:27.622 "name": null,
00:13:27.622 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:27.622 "is_configured": false,
00:13:27.622 "data_offset": 0,
00:13:27.622 "data_size": 63488
00:13:27.622 },
00:13:27.622 {
00:13:27.622 "name": "BaseBdev4",
00:13:27.622 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:27.622 "is_configured": true,
00:13:27.622 "data_offset": 2048,
00:13:27.622 "data_size": 63488
00:13:27.622 }
00:13:27.622 ]
00:13:27.622 }'
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:27.622 11:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.191 11:29:27
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.191 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.191 [2024-11-05 11:29:27.223827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:28.192 "name": "Existed_Raid",
00:13:28.192 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:28.192 "strip_size_kb": 0,
00:13:28.192 "state": "configuring",
00:13:28.192 "raid_level": "raid1",
00:13:28.192 "superblock": true,
00:13:28.192 "num_base_bdevs": 4,
00:13:28.192 "num_base_bdevs_discovered": 3,
00:13:28.192 "num_base_bdevs_operational": 4,
00:13:28.192 "base_bdevs_list": [
00:13:28.192 {
00:13:28.192 "name": "BaseBdev1",
00:13:28.192 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:28.192 "is_configured": true,
00:13:28.192 "data_offset": 2048,
00:13:28.192 "data_size": 63488
00:13:28.192 },
00:13:28.192 {
00:13:28.192 "name": null,
00:13:28.192 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:28.192 "is_configured": false,
00:13:28.192 "data_offset": 0,
00:13:28.192 "data_size": 63488
00:13:28.192 },
00:13:28.192 {
00:13:28.192 "name": "BaseBdev3",
00:13:28.192 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:28.192 "is_configured": true,
00:13:28.192 "data_offset": 2048,
00:13:28.192 "data_size": 63488
00:13:28.192 },
00:13:28.192 {
00:13:28.192 "name": "BaseBdev4",
00:13:28.192 "uuid":
"c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:28.192 "is_configured": true,
00:13:28.192 "data_offset": 2048,
00:13:28.192 "data_size": 63488
00:13:28.192 }
00:13:28.192 ]
00:13:28.192 }'
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:28.192 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.452 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.452 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.452 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.452 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:28.452 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.712 [2024-11-05 11:29:27.747215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:28.712 "name": "Existed_Raid",
00:13:28.712 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:28.712 "strip_size_kb": 0,
00:13:28.712 "state": "configuring",
00:13:28.712 "raid_level": "raid1",
00:13:28.712 "superblock": true,
00:13:28.712 "num_base_bdevs": 4,
00:13:28.712 "num_base_bdevs_discovered": 2,
00:13:28.712 "num_base_bdevs_operational": 4,
00:13:28.712 "base_bdevs_list": [
00:13:28.712 {
00:13:28.712 "name": null,
00:13:28.712
"uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:28.712 "is_configured": false,
00:13:28.712 "data_offset": 0,
00:13:28.712 "data_size": 63488
00:13:28.712 },
00:13:28.712 {
00:13:28.712 "name": null,
00:13:28.712 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:28.712 "is_configured": false,
00:13:28.712 "data_offset": 0,
00:13:28.712 "data_size": 63488
00:13:28.712 },
00:13:28.712 {
00:13:28.712 "name": "BaseBdev3",
00:13:28.712 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:28.712 "is_configured": true,
00:13:28.712 "data_offset": 2048,
00:13:28.712 "data_size": 63488
00:13:28.712 },
00:13:28.712 {
00:13:28.712 "name": "BaseBdev4",
00:13:28.712 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:28.712 "is_configured": true,
00:13:28.712 "data_offset": 2048,
00:13:28.712 "data_size": 63488
00:13:28.712 }
00:13:28.712 ]
00:13:28.712 }'
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:28.712 11:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.280 [2024-11-05 11:29:28.331172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.280 11:29:28
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:29.280 "name": "Existed_Raid",
00:13:29.280 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:29.280 "strip_size_kb": 0,
00:13:29.280 "state": "configuring",
00:13:29.280 "raid_level": "raid1",
00:13:29.280 "superblock": true,
00:13:29.280 "num_base_bdevs": 4,
00:13:29.280 "num_base_bdevs_discovered": 3,
00:13:29.280 "num_base_bdevs_operational": 4,
00:13:29.280 "base_bdevs_list": [
00:13:29.280 {
00:13:29.280 "name": null,
00:13:29.280 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:29.280 "is_configured": false,
00:13:29.280 "data_offset": 0,
00:13:29.280 "data_size": 63488
00:13:29.280 },
00:13:29.280 {
00:13:29.280 "name": "BaseBdev2",
00:13:29.280 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:29.280 "is_configured": true,
00:13:29.280 "data_offset": 2048,
00:13:29.280 "data_size": 63488
00:13:29.280 },
00:13:29.280 {
00:13:29.280 "name": "BaseBdev3",
00:13:29.280 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:29.280 "is_configured": true,
00:13:29.280 "data_offset": 2048,
00:13:29.280 "data_size": 63488
00:13:29.280 },
00:13:29.280 {
00:13:29.280 "name": "BaseBdev4",
00:13:29.280 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:29.280 "is_configured": true,
00:13:29.280 "data_offset": 2048,
00:13:29.280 "data_size": 63488
00:13:29.280 }
00:13:29.280 ]
00:13:29.280 }'
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:29.280 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.539 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.539 11:29:28
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:13:29.539 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.539 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.539 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f534d91b-be3e-4312-b02b-3eb3b7d1db8b
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.799 [2024-11-05 11:29:28.882319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:13:29.799 [2024-11-05 11:29:28.882534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:13:29.799 [2024-11-05 11:29:28.882550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:29.799 NewBaseBdev
00:13:29.799 [2024-11-05 11:29:28.882833] bdev_raid.c:
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:13:29.799 [2024-11-05 11:29:28.882995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:13:29.799 [2024-11-05 11:29:28.883016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:13:29.799 [2024-11-05 11:29:28.883189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.799
11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.799 [
00:13:29.799 {
00:13:29.799 "name": "NewBaseBdev",
00:13:29.799 "aliases": [
00:13:29.799 "f534d91b-be3e-4312-b02b-3eb3b7d1db8b"
00:13:29.799 ],
00:13:29.799 "product_name": "Malloc disk",
00:13:29.799 "block_size": 512,
00:13:29.799 "num_blocks": 65536,
00:13:29.799 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:29.799 "assigned_rate_limits": {
00:13:29.799 "rw_ios_per_sec": 0,
00:13:29.799 "rw_mbytes_per_sec": 0,
00:13:29.799 "r_mbytes_per_sec": 0,
00:13:29.799 "w_mbytes_per_sec": 0
00:13:29.799 },
00:13:29.799 "claimed": true,
00:13:29.799 "claim_type": "exclusive_write",
00:13:29.799 "zoned": false,
00:13:29.799 "supported_io_types": {
00:13:29.799 "read": true,
00:13:29.799 "write": true,
00:13:29.799 "unmap": true,
00:13:29.799 "flush": true,
00:13:29.799 "reset": true,
00:13:29.799 "nvme_admin": false,
00:13:29.799 "nvme_io": false,
00:13:29.799 "nvme_io_md": false,
00:13:29.799 "write_zeroes": true,
00:13:29.799 "zcopy": true,
00:13:29.799 "get_zone_info": false,
00:13:29.799 "zone_management": false,
00:13:29.799 "zone_append": false,
00:13:29.799 "compare": false,
00:13:29.799 "compare_and_write": false,
00:13:29.799 "abort": true,
00:13:29.799 "seek_hole": false,
00:13:29.799 "seek_data": false,
00:13:29.799 "copy": true,
00:13:29.799 "nvme_iov_md": false
00:13:29.799 },
00:13:29.799 "memory_domains": [
00:13:29.799 {
00:13:29.799 "dma_device_id": "system",
00:13:29.799 "dma_device_type": 1
00:13:29.799 },
00:13:29.799 {
00:13:29.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:29.799 "dma_device_type": 2
00:13:29.799 }
00:13:29.799 ],
00:13:29.799 "driver_specific": {}
00:13:29.799 }
00:13:29.799 ]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.799 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:13:29.799 11:29:28
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:29.800 "name": "Existed_Raid",
00:13:29.800 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:29.800 "strip_size_kb": 0,
00:13:29.800
"state": "online",
00:13:29.800 "raid_level": "raid1",
00:13:29.800 "superblock": true,
00:13:29.800 "num_base_bdevs": 4,
00:13:29.800 "num_base_bdevs_discovered": 4,
00:13:29.800 "num_base_bdevs_operational": 4,
00:13:29.800 "base_bdevs_list": [
00:13:29.800 {
00:13:29.800 "name": "NewBaseBdev",
00:13:29.800 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b",
00:13:29.800 "is_configured": true,
00:13:29.800 "data_offset": 2048,
00:13:29.800 "data_size": 63488
00:13:29.800 },
00:13:29.800 {
00:13:29.800 "name": "BaseBdev2",
00:13:29.800 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a",
00:13:29.800 "is_configured": true,
00:13:29.800 "data_offset": 2048,
00:13:29.800 "data_size": 63488
00:13:29.800 },
00:13:29.800 {
00:13:29.800 "name": "BaseBdev3",
00:13:29.800 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62",
00:13:29.800 "is_configured": true,
00:13:29.800 "data_offset": 2048,
00:13:29.800 "data_size": 63488
00:13:29.800 },
00:13:29.800 {
00:13:29.800 "name": "BaseBdev4",
00:13:29.800 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e",
00:13:29.800 "is_configured": true,
00:13:29.800 "data_offset": 2048,
00:13:29.800 "data_size": 63488
00:13:29.800 }
00:13:29.800 ]
00:13:29.800 }'
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:29.800 11:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:13:30.370
11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:30.370 [2024-11-05 11:29:29.357914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:30.370 "name": "Existed_Raid",
00:13:30.370 "aliases": [
00:13:30.370 "b06852a2-5e33-43c4-a7c6-1687cc2d1652"
00:13:30.370 ],
00:13:30.370 "product_name": "Raid Volume",
00:13:30.370 "block_size": 512,
00:13:30.370 "num_blocks": 63488,
00:13:30.370 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652",
00:13:30.370 "assigned_rate_limits": {
00:13:30.370 "rw_ios_per_sec": 0,
00:13:30.370 "rw_mbytes_per_sec": 0,
00:13:30.370 "r_mbytes_per_sec": 0,
00:13:30.370 "w_mbytes_per_sec": 0
00:13:30.370 },
00:13:30.370 "claimed": false,
00:13:30.370 "zoned": false,
00:13:30.370 "supported_io_types": {
00:13:30.370 "read": true,
00:13:30.370 "write": true,
00:13:30.370 "unmap": false,
00:13:30.370 "flush": false,
00:13:30.370 "reset": true,
00:13:30.370 "nvme_admin": false,
00:13:30.370 "nvme_io": false,
00:13:30.370 "nvme_io_md": false,
00:13:30.370 "write_zeroes": true,
00:13:30.370 "zcopy": false,
00:13:30.370 "get_zone_info": false,
00:13:30.370 "zone_management": false,
00:13:30.370 "zone_append": false,
00:13:30.370 "compare": false,
00:13:30.370 "compare_and_write": false,
00:13:30.370
"abort": false, 00:13:30.370 "seek_hole": false, 00:13:30.370 "seek_data": false, 00:13:30.370 "copy": false, 00:13:30.370 "nvme_iov_md": false 00:13:30.370 }, 00:13:30.370 "memory_domains": [ 00:13:30.370 { 00:13:30.370 "dma_device_id": "system", 00:13:30.370 "dma_device_type": 1 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.370 "dma_device_type": 2 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "system", 00:13:30.370 "dma_device_type": 1 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.370 "dma_device_type": 2 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "system", 00:13:30.370 "dma_device_type": 1 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.370 "dma_device_type": 2 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "system", 00:13:30.370 "dma_device_type": 1 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.370 "dma_device_type": 2 00:13:30.370 } 00:13:30.370 ], 00:13:30.370 "driver_specific": { 00:13:30.370 "raid": { 00:13:30.370 "uuid": "b06852a2-5e33-43c4-a7c6-1687cc2d1652", 00:13:30.370 "strip_size_kb": 0, 00:13:30.370 "state": "online", 00:13:30.370 "raid_level": "raid1", 00:13:30.370 "superblock": true, 00:13:30.370 "num_base_bdevs": 4, 00:13:30.370 "num_base_bdevs_discovered": 4, 00:13:30.370 "num_base_bdevs_operational": 4, 00:13:30.370 "base_bdevs_list": [ 00:13:30.370 { 00:13:30.370 "name": "NewBaseBdev", 00:13:30.370 "uuid": "f534d91b-be3e-4312-b02b-3eb3b7d1db8b", 00:13:30.370 "is_configured": true, 00:13:30.370 "data_offset": 2048, 00:13:30.370 "data_size": 63488 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "name": "BaseBdev2", 00:13:30.370 "uuid": "286647bd-c3fb-44d7-b982-74cdd2798a7a", 00:13:30.370 "is_configured": true, 00:13:30.370 "data_offset": 2048, 00:13:30.370 "data_size": 63488 00:13:30.370 }, 00:13:30.370 { 
00:13:30.370 "name": "BaseBdev3", 00:13:30.370 "uuid": "609e832b-adb2-4a52-a671-bc7a02b80f62", 00:13:30.370 "is_configured": true, 00:13:30.370 "data_offset": 2048, 00:13:30.370 "data_size": 63488 00:13:30.370 }, 00:13:30.370 { 00:13:30.370 "name": "BaseBdev4", 00:13:30.370 "uuid": "c94aec96-daf6-439a-b0aa-ab813b9e610e", 00:13:30.370 "is_configured": true, 00:13:30.370 "data_offset": 2048, 00:13:30.370 "data_size": 63488 00:13:30.370 } 00:13:30.370 ] 00:13:30.370 } 00:13:30.370 } 00:13:30.370 }' 00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.370 BaseBdev2 00:13:30.370 BaseBdev3 00:13:30.370 BaseBdev4' 00:13:30.370 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.371 [2024-11-05 11:29:29.609176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.371 [2024-11-05 11:29:29.609213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.371 [2024-11-05 11:29:29.609307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.371 [2024-11-05 11:29:29.609595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.371 [2024-11-05 11:29:29.609616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73960 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73960 ']' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73960 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.371 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73960 00:13:30.629 killing process with pid 73960 00:13:30.629 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.629 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.629 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73960' 00:13:30.629 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73960 00:13:30.629 [2024-11-05 11:29:29.646442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.629 11:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73960 00:13:30.887 [2024-11-05 11:29:30.034718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.268 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:32.268 00:13:32.268 real 0m11.656s 00:13:32.268 user 0m18.577s 00:13:32.268 sys 0m2.080s 00:13:32.268 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:32.268 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.268 ************************************ 00:13:32.268 END TEST raid_state_function_test_sb 00:13:32.268 ************************************ 00:13:32.268 11:29:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:32.268 11:29:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:32.268 11:29:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.268 11:29:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.268 ************************************ 00:13:32.268 START TEST raid_superblock_test 00:13:32.268 ************************************ 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:32.268 11:29:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74631 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74631 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74631 ']' 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.268 11:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.268 [2024-11-05 11:29:31.284811] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:32.268 [2024-11-05 11:29:31.284951] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74631 ] 00:13:32.268 [2024-11-05 11:29:31.452457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.528 [2024-11-05 11:29:31.572542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.528 [2024-11-05 11:29:31.778568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.528 [2024-11-05 11:29:31.778628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:33.098 
11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 malloc1 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 [2024-11-05 11:29:32.161206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:33.098 [2024-11-05 11:29:32.161292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.098 [2024-11-05 11:29:32.161314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:33.098 [2024-11-05 11:29:32.161325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.098 [2024-11-05 11:29:32.163469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.098 [2024-11-05 11:29:32.163510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:33.098 pt1 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 malloc2 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 [2024-11-05 11:29:32.216407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.098 [2024-11-05 11:29:32.216468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.098 [2024-11-05 11:29:32.216488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.098 [2024-11-05 11:29:32.216498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.098 [2024-11-05 11:29:32.218546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.098 [2024-11-05 11:29:32.218581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.098 
pt2 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 malloc3 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.098 [2024-11-05 11:29:32.281795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:33.098 [2024-11-05 11:29:32.281870] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.098 [2024-11-05 11:29:32.281890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:33.098 [2024-11-05 11:29:32.281901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.098 [2024-11-05 11:29:32.283999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.098 [2024-11-05 11:29:32.284037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:33.098 pt3 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:33.098 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.099 malloc4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.099 [2024-11-05 11:29:32.335775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:33.099 [2024-11-05 11:29:32.335842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.099 [2024-11-05 11:29:32.335865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:33.099 [2024-11-05 11:29:32.335875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.099 [2024-11-05 11:29:32.337969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.099 [2024-11-05 11:29:32.338010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:33.099 pt4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.099 [2024-11-05 11:29:32.347797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.099 [2024-11-05 11:29:32.349700] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.099 [2024-11-05 11:29:32.349767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.099 [2024-11-05 11:29:32.349808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:33.099 [2024-11-05 11:29:32.350001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:33.099 [2024-11-05 11:29:32.350028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.099 [2024-11-05 11:29:32.350344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:33.099 [2024-11-05 11:29:32.350538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:33.099 [2024-11-05 11:29:32.350559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:33.099 [2024-11-05 11:29:32.350728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.099 
11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.099 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.359 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.359 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.359 "name": "raid_bdev1", 00:13:33.359 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:33.359 "strip_size_kb": 0, 00:13:33.359 "state": "online", 00:13:33.359 "raid_level": "raid1", 00:13:33.359 "superblock": true, 00:13:33.359 "num_base_bdevs": 4, 00:13:33.359 "num_base_bdevs_discovered": 4, 00:13:33.359 "num_base_bdevs_operational": 4, 00:13:33.359 "base_bdevs_list": [ 00:13:33.359 { 00:13:33.359 "name": "pt1", 00:13:33.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.359 "is_configured": true, 00:13:33.359 "data_offset": 2048, 00:13:33.359 "data_size": 63488 00:13:33.359 }, 00:13:33.359 { 00:13:33.359 "name": "pt2", 00:13:33.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.359 "is_configured": true, 00:13:33.359 "data_offset": 2048, 00:13:33.359 "data_size": 63488 00:13:33.359 }, 00:13:33.359 { 00:13:33.359 "name": "pt3", 00:13:33.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.359 "is_configured": true, 00:13:33.359 "data_offset": 2048, 00:13:33.359 "data_size": 63488 
00:13:33.359 }, 00:13:33.359 { 00:13:33.359 "name": "pt4", 00:13:33.359 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.359 "is_configured": true, 00:13:33.359 "data_offset": 2048, 00:13:33.359 "data_size": 63488 00:13:33.359 } 00:13:33.359 ] 00:13:33.359 }' 00:13:33.359 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.359 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.622 [2024-11-05 11:29:32.803394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.622 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.622 "name": "raid_bdev1", 00:13:33.622 "aliases": [ 00:13:33.622 "a39feabe-d4ee-44c6-a602-24f1989fd2cd" 00:13:33.622 ], 
00:13:33.622 "product_name": "Raid Volume", 00:13:33.622 "block_size": 512, 00:13:33.622 "num_blocks": 63488, 00:13:33.622 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:33.622 "assigned_rate_limits": { 00:13:33.622 "rw_ios_per_sec": 0, 00:13:33.622 "rw_mbytes_per_sec": 0, 00:13:33.622 "r_mbytes_per_sec": 0, 00:13:33.622 "w_mbytes_per_sec": 0 00:13:33.622 }, 00:13:33.622 "claimed": false, 00:13:33.622 "zoned": false, 00:13:33.622 "supported_io_types": { 00:13:33.622 "read": true, 00:13:33.622 "write": true, 00:13:33.622 "unmap": false, 00:13:33.622 "flush": false, 00:13:33.622 "reset": true, 00:13:33.622 "nvme_admin": false, 00:13:33.622 "nvme_io": false, 00:13:33.622 "nvme_io_md": false, 00:13:33.622 "write_zeroes": true, 00:13:33.622 "zcopy": false, 00:13:33.622 "get_zone_info": false, 00:13:33.622 "zone_management": false, 00:13:33.622 "zone_append": false, 00:13:33.622 "compare": false, 00:13:33.622 "compare_and_write": false, 00:13:33.622 "abort": false, 00:13:33.622 "seek_hole": false, 00:13:33.622 "seek_data": false, 00:13:33.622 "copy": false, 00:13:33.622 "nvme_iov_md": false 00:13:33.622 }, 00:13:33.622 "memory_domains": [ 00:13:33.622 { 00:13:33.622 "dma_device_id": "system", 00:13:33.622 "dma_device_type": 1 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.622 "dma_device_type": 2 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "system", 00:13:33.622 "dma_device_type": 1 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.622 "dma_device_type": 2 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "system", 00:13:33.622 "dma_device_type": 1 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.622 "dma_device_type": 2 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": "system", 00:13:33.622 "dma_device_type": 1 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:33.622 "dma_device_type": 2 00:13:33.622 } 00:13:33.622 ], 00:13:33.622 "driver_specific": { 00:13:33.622 "raid": { 00:13:33.622 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:33.622 "strip_size_kb": 0, 00:13:33.622 "state": "online", 00:13:33.622 "raid_level": "raid1", 00:13:33.622 "superblock": true, 00:13:33.622 "num_base_bdevs": 4, 00:13:33.622 "num_base_bdevs_discovered": 4, 00:13:33.622 "num_base_bdevs_operational": 4, 00:13:33.622 "base_bdevs_list": [ 00:13:33.622 { 00:13:33.622 "name": "pt1", 00:13:33.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.622 "is_configured": true, 00:13:33.622 "data_offset": 2048, 00:13:33.622 "data_size": 63488 00:13:33.622 }, 00:13:33.622 { 00:13:33.622 "name": "pt2", 00:13:33.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.623 "is_configured": true, 00:13:33.623 "data_offset": 2048, 00:13:33.623 "data_size": 63488 00:13:33.623 }, 00:13:33.623 { 00:13:33.623 "name": "pt3", 00:13:33.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.623 "is_configured": true, 00:13:33.623 "data_offset": 2048, 00:13:33.623 "data_size": 63488 00:13:33.623 }, 00:13:33.623 { 00:13:33.623 "name": "pt4", 00:13:33.623 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.623 "is_configured": true, 00:13:33.623 "data_offset": 2048, 00:13:33.623 "data_size": 63488 00:13:33.623 } 00:13:33.623 ] 00:13:33.623 } 00:13:33.623 } 00:13:33.623 }' 00:13:33.623 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.623 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:33.623 pt2 00:13:33.623 pt3 00:13:33.623 pt4' 00:13:33.623 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.882 11:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.882 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.882 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.882 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.882 11:29:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.883 [2024-11-05 11:29:33.106772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a39feabe-d4ee-44c6-a602-24f1989fd2cd 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a39feabe-d4ee-44c6-a602-24f1989fd2cd ']' 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.883 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.883 [2024-11-05 11:29:33.154401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.883 [2024-11-05 11:29:33.154430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.883 [2024-11-05 11:29:33.154524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.883 [2024-11-05 11:29:33.154612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.883 [2024-11-05 11:29:33.154645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.143 11:29:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.143 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.143 [2024-11-05 11:29:33.318165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:34.143 [2024-11-05 11:29:33.320324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:34.143 [2024-11-05 11:29:33.320451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:34.143 [2024-11-05 11:29:33.320491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:34.143 [2024-11-05 11:29:33.320548] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:34.143 [2024-11-05 11:29:33.320606] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:34.143 [2024-11-05 11:29:33.320626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:34.143 [2024-11-05 11:29:33.320645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:34.143 [2024-11-05 11:29:33.320658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.144 [2024-11-05 11:29:33.320670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:34.144 request: 00:13:34.144 { 00:13:34.144 "name": "raid_bdev1", 00:13:34.144 "raid_level": "raid1", 00:13:34.144 "base_bdevs": [ 00:13:34.144 "malloc1", 00:13:34.144 "malloc2", 00:13:34.144 "malloc3", 00:13:34.144 "malloc4" 00:13:34.144 ], 00:13:34.144 "superblock": false, 00:13:34.144 "method": "bdev_raid_create", 00:13:34.144 "req_id": 1 00:13:34.144 } 00:13:34.144 Got JSON-RPC error response 00:13:34.144 response: 00:13:34.144 { 00:13:34.144 "code": -17, 00:13:34.144 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:34.144 } 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.144 
11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.144 [2024-11-05 11:29:33.382023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.144 [2024-11-05 11:29:33.382158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.144 [2024-11-05 11:29:33.382196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:34.144 [2024-11-05 11:29:33.382245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.144 [2024-11-05 11:29:33.384565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.144 [2024-11-05 11:29:33.384662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.144 [2024-11-05 11:29:33.384784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:34.144 [2024-11-05 11:29:33.384868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.144 pt1 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.144 11:29:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.144 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.403 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.403 "name": "raid_bdev1", 00:13:34.403 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:34.403 "strip_size_kb": 0, 00:13:34.403 "state": "configuring", 00:13:34.403 "raid_level": "raid1", 00:13:34.403 "superblock": true, 00:13:34.403 "num_base_bdevs": 4, 00:13:34.403 "num_base_bdevs_discovered": 1, 00:13:34.403 "num_base_bdevs_operational": 4, 00:13:34.403 "base_bdevs_list": [ 00:13:34.403 { 00:13:34.403 "name": "pt1", 00:13:34.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.403 "is_configured": true, 00:13:34.403 "data_offset": 2048, 00:13:34.403 "data_size": 63488 00:13:34.403 }, 00:13:34.403 { 00:13:34.403 "name": null, 00:13:34.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.403 "is_configured": false, 00:13:34.403 "data_offset": 2048, 00:13:34.403 "data_size": 63488 00:13:34.403 }, 00:13:34.403 { 00:13:34.403 "name": null, 00:13:34.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.403 
"is_configured": false, 00:13:34.403 "data_offset": 2048, 00:13:34.403 "data_size": 63488 00:13:34.403 }, 00:13:34.403 { 00:13:34.403 "name": null, 00:13:34.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.403 "is_configured": false, 00:13:34.403 "data_offset": 2048, 00:13:34.403 "data_size": 63488 00:13:34.403 } 00:13:34.403 ] 00:13:34.403 }' 00:13:34.403 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.403 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.662 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:34.662 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.663 [2024-11-05 11:29:33.857247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.663 [2024-11-05 11:29:33.857324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.663 [2024-11-05 11:29:33.857345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:34.663 [2024-11-05 11:29:33.857358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.663 [2024-11-05 11:29:33.857864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.663 [2024-11-05 11:29:33.857888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.663 [2024-11-05 11:29:33.857976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:34.663 [2024-11-05 11:29:33.858011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:34.663 pt2 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.663 [2024-11-05 11:29:33.869220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.663 "name": "raid_bdev1", 00:13:34.663 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:34.663 "strip_size_kb": 0, 00:13:34.663 "state": "configuring", 00:13:34.663 "raid_level": "raid1", 00:13:34.663 "superblock": true, 00:13:34.663 "num_base_bdevs": 4, 00:13:34.663 "num_base_bdevs_discovered": 1, 00:13:34.663 "num_base_bdevs_operational": 4, 00:13:34.663 "base_bdevs_list": [ 00:13:34.663 { 00:13:34.663 "name": "pt1", 00:13:34.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.663 "is_configured": true, 00:13:34.663 "data_offset": 2048, 00:13:34.663 "data_size": 63488 00:13:34.663 }, 00:13:34.663 { 00:13:34.663 "name": null, 00:13:34.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.663 "is_configured": false, 00:13:34.663 "data_offset": 0, 00:13:34.663 "data_size": 63488 00:13:34.663 }, 00:13:34.663 { 00:13:34.663 "name": null, 00:13:34.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.663 "is_configured": false, 00:13:34.663 "data_offset": 2048, 00:13:34.663 "data_size": 63488 00:13:34.663 }, 00:13:34.663 { 00:13:34.663 "name": null, 00:13:34.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.663 "is_configured": false, 00:13:34.663 "data_offset": 2048, 00:13:34.663 "data_size": 63488 00:13:34.663 } 00:13:34.663 ] 00:13:34.663 }' 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.663 11:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.232 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:35.232 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.232 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.232 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.232 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.232 [2024-11-05 11:29:34.324441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.232 [2024-11-05 11:29:34.324578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.232 [2024-11-05 11:29:34.324628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:35.232 [2024-11-05 11:29:34.324641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.232 [2024-11-05 11:29:34.325101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.233 [2024-11-05 11:29:34.325146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.233 [2024-11-05 11:29:34.325245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:35.233 [2024-11-05 11:29:34.325267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.233 pt2 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.233 11:29:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.233 [2024-11-05 11:29:34.332394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.233 [2024-11-05 11:29:34.332462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.233 [2024-11-05 11:29:34.332482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:35.233 [2024-11-05 11:29:34.332491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.233 [2024-11-05 11:29:34.332898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.233 [2024-11-05 11:29:34.332928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.233 [2024-11-05 11:29:34.333011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:35.233 [2024-11-05 11:29:34.333032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.233 pt3 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.233 [2024-11-05 11:29:34.344380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:35.233 [2024-11-05 
11:29:34.344449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.233 [2024-11-05 11:29:34.344481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:35.233 [2024-11-05 11:29:34.344490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.233 [2024-11-05 11:29:34.344969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.233 [2024-11-05 11:29:34.344991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:35.233 [2024-11-05 11:29:34.345074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:35.233 [2024-11-05 11:29:34.345101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:35.233 [2024-11-05 11:29:34.345279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.233 [2024-11-05 11:29:34.345289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.233 [2024-11-05 11:29:34.345531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.233 [2024-11-05 11:29:34.345688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.233 [2024-11-05 11:29:34.345705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:35.233 [2024-11-05 11:29:34.345846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.233 pt4 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.233 "name": "raid_bdev1", 00:13:35.233 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:35.233 "strip_size_kb": 0, 00:13:35.233 "state": "online", 00:13:35.233 "raid_level": "raid1", 00:13:35.233 "superblock": true, 00:13:35.233 "num_base_bdevs": 4, 00:13:35.233 
"num_base_bdevs_discovered": 4, 00:13:35.233 "num_base_bdevs_operational": 4, 00:13:35.233 "base_bdevs_list": [ 00:13:35.233 { 00:13:35.233 "name": "pt1", 00:13:35.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.233 "is_configured": true, 00:13:35.233 "data_offset": 2048, 00:13:35.233 "data_size": 63488 00:13:35.233 }, 00:13:35.233 { 00:13:35.233 "name": "pt2", 00:13:35.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.233 "is_configured": true, 00:13:35.233 "data_offset": 2048, 00:13:35.233 "data_size": 63488 00:13:35.233 }, 00:13:35.233 { 00:13:35.233 "name": "pt3", 00:13:35.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.233 "is_configured": true, 00:13:35.233 "data_offset": 2048, 00:13:35.233 "data_size": 63488 00:13:35.233 }, 00:13:35.233 { 00:13:35.233 "name": "pt4", 00:13:35.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.233 "is_configured": true, 00:13:35.233 "data_offset": 2048, 00:13:35.233 "data_size": 63488 00:13:35.233 } 00:13:35.233 ] 00:13:35.233 }' 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.233 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.803 [2024-11-05 11:29:34.811943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.803 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.803 "name": "raid_bdev1", 00:13:35.803 "aliases": [ 00:13:35.803 "a39feabe-d4ee-44c6-a602-24f1989fd2cd" 00:13:35.803 ], 00:13:35.803 "product_name": "Raid Volume", 00:13:35.803 "block_size": 512, 00:13:35.803 "num_blocks": 63488, 00:13:35.803 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:35.803 "assigned_rate_limits": { 00:13:35.803 "rw_ios_per_sec": 0, 00:13:35.803 "rw_mbytes_per_sec": 0, 00:13:35.803 "r_mbytes_per_sec": 0, 00:13:35.803 "w_mbytes_per_sec": 0 00:13:35.803 }, 00:13:35.803 "claimed": false, 00:13:35.803 "zoned": false, 00:13:35.803 "supported_io_types": { 00:13:35.803 "read": true, 00:13:35.803 "write": true, 00:13:35.803 "unmap": false, 00:13:35.803 "flush": false, 00:13:35.803 "reset": true, 00:13:35.803 "nvme_admin": false, 00:13:35.803 "nvme_io": false, 00:13:35.803 "nvme_io_md": false, 00:13:35.803 "write_zeroes": true, 00:13:35.803 "zcopy": false, 00:13:35.803 "get_zone_info": false, 00:13:35.803 "zone_management": false, 00:13:35.803 "zone_append": false, 00:13:35.803 "compare": false, 00:13:35.803 "compare_and_write": false, 00:13:35.803 "abort": false, 00:13:35.804 "seek_hole": false, 00:13:35.804 "seek_data": false, 00:13:35.804 "copy": false, 00:13:35.804 "nvme_iov_md": false 00:13:35.804 }, 00:13:35.804 "memory_domains": [ 00:13:35.804 { 00:13:35.804 "dma_device_id": "system", 00:13:35.804 
"dma_device_type": 1 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.804 "dma_device_type": 2 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "system", 00:13:35.804 "dma_device_type": 1 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.804 "dma_device_type": 2 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "system", 00:13:35.804 "dma_device_type": 1 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.804 "dma_device_type": 2 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "system", 00:13:35.804 "dma_device_type": 1 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.804 "dma_device_type": 2 00:13:35.804 } 00:13:35.804 ], 00:13:35.804 "driver_specific": { 00:13:35.804 "raid": { 00:13:35.804 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:35.804 "strip_size_kb": 0, 00:13:35.804 "state": "online", 00:13:35.804 "raid_level": "raid1", 00:13:35.804 "superblock": true, 00:13:35.804 "num_base_bdevs": 4, 00:13:35.804 "num_base_bdevs_discovered": 4, 00:13:35.804 "num_base_bdevs_operational": 4, 00:13:35.804 "base_bdevs_list": [ 00:13:35.804 { 00:13:35.804 "name": "pt1", 00:13:35.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.804 "is_configured": true, 00:13:35.804 "data_offset": 2048, 00:13:35.804 "data_size": 63488 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "name": "pt2", 00:13:35.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.804 "is_configured": true, 00:13:35.804 "data_offset": 2048, 00:13:35.804 "data_size": 63488 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "name": "pt3", 00:13:35.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.804 "is_configured": true, 00:13:35.804 "data_offset": 2048, 00:13:35.804 "data_size": 63488 00:13:35.804 }, 00:13:35.804 { 00:13:35.804 "name": "pt4", 00:13:35.804 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:35.804 "is_configured": true, 00:13:35.804 "data_offset": 2048, 00:13:35.804 "data_size": 63488 00:13:35.804 } 00:13:35.804 ] 00:13:35.804 } 00:13:35.804 } 00:13:35.804 }' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.804 pt2 00:13:35.804 pt3 00:13:35.804 pt4' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.804 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.804 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.064 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:36.065 [2024-11-05 11:29:35.143399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a39feabe-d4ee-44c6-a602-24f1989fd2cd '!=' a39feabe-d4ee-44c6-a602-24f1989fd2cd ']' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.065 [2024-11-05 11:29:35.191046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:36.065 11:29:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.065 "name": "raid_bdev1", 00:13:36.065 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:36.065 "strip_size_kb": 0, 00:13:36.065 "state": "online", 
00:13:36.065 "raid_level": "raid1", 00:13:36.065 "superblock": true, 00:13:36.065 "num_base_bdevs": 4, 00:13:36.065 "num_base_bdevs_discovered": 3, 00:13:36.065 "num_base_bdevs_operational": 3, 00:13:36.065 "base_bdevs_list": [ 00:13:36.065 { 00:13:36.065 "name": null, 00:13:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.065 "is_configured": false, 00:13:36.065 "data_offset": 0, 00:13:36.065 "data_size": 63488 00:13:36.065 }, 00:13:36.065 { 00:13:36.065 "name": "pt2", 00:13:36.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.065 "is_configured": true, 00:13:36.065 "data_offset": 2048, 00:13:36.065 "data_size": 63488 00:13:36.065 }, 00:13:36.065 { 00:13:36.065 "name": "pt3", 00:13:36.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.065 "is_configured": true, 00:13:36.065 "data_offset": 2048, 00:13:36.065 "data_size": 63488 00:13:36.065 }, 00:13:36.065 { 00:13:36.065 "name": "pt4", 00:13:36.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.065 "is_configured": true, 00:13:36.065 "data_offset": 2048, 00:13:36.065 "data_size": 63488 00:13:36.065 } 00:13:36.065 ] 00:13:36.065 }' 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.065 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 [2024-11-05 11:29:35.698140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.635 [2024-11-05 11:29:35.698239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.635 [2024-11-05 11:29:35.698348] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:36.635 [2024-11-05 11:29:35.698466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.635 [2024-11-05 11:29:35.698539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.635 
11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 [2024-11-05 11:29:35.781946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.635 [2024-11-05 11:29:35.781999] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.635 [2024-11-05 11:29:35.782017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:36.635 [2024-11-05 11:29:35.782026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.635 [2024-11-05 11:29:35.784380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.635 [2024-11-05 11:29:35.784455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.635 [2024-11-05 11:29:35.784540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.635 [2024-11-05 11:29:35.784585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.635 pt2 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.635 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.636 "name": "raid_bdev1", 00:13:36.636 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:36.636 "strip_size_kb": 0, 00:13:36.636 "state": "configuring", 00:13:36.636 "raid_level": "raid1", 00:13:36.636 "superblock": true, 00:13:36.636 "num_base_bdevs": 4, 00:13:36.636 "num_base_bdevs_discovered": 1, 00:13:36.636 "num_base_bdevs_operational": 3, 00:13:36.636 "base_bdevs_list": [ 00:13:36.636 { 00:13:36.636 "name": null, 00:13:36.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.636 "is_configured": false, 00:13:36.636 "data_offset": 2048, 00:13:36.636 "data_size": 63488 00:13:36.636 }, 00:13:36.636 { 00:13:36.636 "name": "pt2", 00:13:36.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.636 "is_configured": true, 00:13:36.636 "data_offset": 2048, 00:13:36.636 "data_size": 63488 00:13:36.636 }, 00:13:36.636 { 00:13:36.636 "name": null, 00:13:36.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.636 "is_configured": false, 00:13:36.636 "data_offset": 2048, 00:13:36.636 "data_size": 63488 00:13:36.636 }, 00:13:36.636 { 00:13:36.636 "name": null, 00:13:36.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.636 "is_configured": false, 00:13:36.636 "data_offset": 2048, 00:13:36.636 "data_size": 63488 00:13:36.636 } 00:13:36.636 ] 00:13:36.636 }' 
00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.636 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.206 [2024-11-05 11:29:36.273195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.206 [2024-11-05 11:29:36.273336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.206 [2024-11-05 11:29:36.273377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:37.206 [2024-11-05 11:29:36.273407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.206 [2024-11-05 11:29:36.273954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.206 [2024-11-05 11:29:36.274020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.206 [2024-11-05 11:29:36.274184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:37.206 [2024-11-05 11:29:36.274237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.206 pt3 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.206 "name": "raid_bdev1", 00:13:37.206 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:37.206 "strip_size_kb": 0, 00:13:37.206 "state": "configuring", 00:13:37.206 "raid_level": "raid1", 00:13:37.206 "superblock": true, 00:13:37.206 "num_base_bdevs": 4, 00:13:37.206 "num_base_bdevs_discovered": 2, 00:13:37.206 "num_base_bdevs_operational": 3, 00:13:37.206 
"base_bdevs_list": [ 00:13:37.206 { 00:13:37.206 "name": null, 00:13:37.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.206 "is_configured": false, 00:13:37.206 "data_offset": 2048, 00:13:37.206 "data_size": 63488 00:13:37.206 }, 00:13:37.206 { 00:13:37.206 "name": "pt2", 00:13:37.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.206 "is_configured": true, 00:13:37.206 "data_offset": 2048, 00:13:37.206 "data_size": 63488 00:13:37.206 }, 00:13:37.206 { 00:13:37.206 "name": "pt3", 00:13:37.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.206 "is_configured": true, 00:13:37.206 "data_offset": 2048, 00:13:37.206 "data_size": 63488 00:13:37.206 }, 00:13:37.206 { 00:13:37.206 "name": null, 00:13:37.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.206 "is_configured": false, 00:13:37.206 "data_offset": 2048, 00:13:37.206 "data_size": 63488 00:13:37.206 } 00:13:37.206 ] 00:13:37.206 }' 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.206 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.466 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 [2024-11-05 11:29:36.744466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:37.727 [2024-11-05 11:29:36.744546] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.727 [2024-11-05 11:29:36.744573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:37.727 [2024-11-05 11:29:36.744585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.727 [2024-11-05 11:29:36.745106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.727 [2024-11-05 11:29:36.745140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:37.727 [2024-11-05 11:29:36.745237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:37.727 [2024-11-05 11:29:36.745285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:37.727 [2024-11-05 11:29:36.745449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:37.727 [2024-11-05 11:29:36.745460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.727 [2024-11-05 11:29:36.745750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:37.727 [2024-11-05 11:29:36.745928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:37.727 [2024-11-05 11:29:36.745941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:37.727 [2024-11-05 11:29:36.746087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.727 pt4 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.727 "name": "raid_bdev1", 00:13:37.727 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:37.727 "strip_size_kb": 0, 00:13:37.727 "state": "online", 00:13:37.727 "raid_level": "raid1", 00:13:37.727 "superblock": true, 00:13:37.727 "num_base_bdevs": 4, 00:13:37.727 "num_base_bdevs_discovered": 3, 00:13:37.727 "num_base_bdevs_operational": 3, 00:13:37.727 "base_bdevs_list": [ 00:13:37.727 { 00:13:37.727 "name": null, 00:13:37.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.727 "is_configured": false, 00:13:37.727 
"data_offset": 2048, 00:13:37.727 "data_size": 63488 00:13:37.727 }, 00:13:37.727 { 00:13:37.727 "name": "pt2", 00:13:37.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.727 "is_configured": true, 00:13:37.727 "data_offset": 2048, 00:13:37.727 "data_size": 63488 00:13:37.727 }, 00:13:37.727 { 00:13:37.727 "name": "pt3", 00:13:37.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.727 "is_configured": true, 00:13:37.727 "data_offset": 2048, 00:13:37.727 "data_size": 63488 00:13:37.727 }, 00:13:37.727 { 00:13:37.727 "name": "pt4", 00:13:37.727 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.727 "is_configured": true, 00:13:37.727 "data_offset": 2048, 00:13:37.727 "data_size": 63488 00:13:37.727 } 00:13:37.727 ] 00:13:37.727 }' 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.727 11:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 [2024-11-05 11:29:37.171690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.988 [2024-11-05 11:29:37.171780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.988 [2024-11-05 11:29:37.171908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.988 [2024-11-05 11:29:37.171997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.988 [2024-11-05 11:29:37.172048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:37.988 11:29:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.988 [2024-11-05 11:29:37.247546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.988 [2024-11-05 11:29:37.247612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:37.988 [2024-11-05 11:29:37.247631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:37.988 [2024-11-05 11:29:37.247642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.988 [2024-11-05 11:29:37.249855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.988 [2024-11-05 11:29:37.249897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.988 [2024-11-05 11:29:37.249977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.988 [2024-11-05 11:29:37.250024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.988 [2024-11-05 11:29:37.250153] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:37.988 [2024-11-05 11:29:37.250167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.988 [2024-11-05 11:29:37.250182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:37.988 [2024-11-05 11:29:37.250259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.988 [2024-11-05 11:29:37.250373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.988 pt1 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.988 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.248 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.248 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.248 "name": "raid_bdev1", 00:13:38.248 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:38.248 "strip_size_kb": 0, 00:13:38.248 "state": "configuring", 00:13:38.248 "raid_level": "raid1", 00:13:38.248 "superblock": true, 00:13:38.248 "num_base_bdevs": 4, 00:13:38.248 "num_base_bdevs_discovered": 2, 00:13:38.248 "num_base_bdevs_operational": 3, 00:13:38.248 "base_bdevs_list": [ 00:13:38.248 { 00:13:38.248 "name": null, 00:13:38.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.248 "is_configured": false, 00:13:38.248 "data_offset": 2048, 00:13:38.248 
"data_size": 63488 00:13:38.248 }, 00:13:38.248 { 00:13:38.248 "name": "pt2", 00:13:38.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.248 "is_configured": true, 00:13:38.248 "data_offset": 2048, 00:13:38.248 "data_size": 63488 00:13:38.248 }, 00:13:38.248 { 00:13:38.248 "name": "pt3", 00:13:38.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.248 "is_configured": true, 00:13:38.248 "data_offset": 2048, 00:13:38.248 "data_size": 63488 00:13:38.248 }, 00:13:38.248 { 00:13:38.248 "name": null, 00:13:38.248 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.248 "is_configured": false, 00:13:38.248 "data_offset": 2048, 00:13:38.248 "data_size": 63488 00:13:38.248 } 00:13:38.248 ] 00:13:38.248 }' 00:13:38.248 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.248 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.508 [2024-11-05 
11:29:37.690830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:38.508 [2024-11-05 11:29:37.690934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.508 [2024-11-05 11:29:37.690973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:38.508 [2024-11-05 11:29:37.691002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.508 [2024-11-05 11:29:37.691499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.508 [2024-11-05 11:29:37.691558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:38.508 [2024-11-05 11:29:37.691677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:38.508 [2024-11-05 11:29:37.691745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:38.508 [2024-11-05 11:29:37.691926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:38.508 [2024-11-05 11:29:37.691964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.508 [2024-11-05 11:29:37.692241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:38.508 [2024-11-05 11:29:37.692439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:38.508 [2024-11-05 11:29:37.692483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:38.508 [2024-11-05 11:29:37.692646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.508 pt4 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.508 11:29:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.508 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.508 "name": "raid_bdev1", 00:13:38.508 "uuid": "a39feabe-d4ee-44c6-a602-24f1989fd2cd", 00:13:38.508 "strip_size_kb": 0, 00:13:38.508 "state": "online", 00:13:38.508 "raid_level": "raid1", 00:13:38.508 "superblock": true, 00:13:38.508 "num_base_bdevs": 4, 00:13:38.508 "num_base_bdevs_discovered": 3, 00:13:38.509 "num_base_bdevs_operational": 3, 00:13:38.509 "base_bdevs_list": [ 00:13:38.509 { 
00:13:38.509 "name": null, 00:13:38.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.509 "is_configured": false, 00:13:38.509 "data_offset": 2048, 00:13:38.509 "data_size": 63488 00:13:38.509 }, 00:13:38.509 { 00:13:38.509 "name": "pt2", 00:13:38.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.509 "is_configured": true, 00:13:38.509 "data_offset": 2048, 00:13:38.509 "data_size": 63488 00:13:38.509 }, 00:13:38.509 { 00:13:38.509 "name": "pt3", 00:13:38.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.509 "is_configured": true, 00:13:38.509 "data_offset": 2048, 00:13:38.509 "data_size": 63488 00:13:38.509 }, 00:13:38.509 { 00:13:38.509 "name": "pt4", 00:13:38.509 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.509 "is_configured": true, 00:13:38.509 "data_offset": 2048, 00:13:38.509 "data_size": 63488 00:13:38.509 } 00:13:38.509 ] 00:13:38.509 }' 00:13:38.509 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.509 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:39.078 
11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.078 [2024-11-05 11:29:38.170366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a39feabe-d4ee-44c6-a602-24f1989fd2cd '!=' a39feabe-d4ee-44c6-a602-24f1989fd2cd ']' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74631 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74631 ']' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74631 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74631 00:13:39.078 killing process with pid 74631 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74631' 00:13:39.078 11:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74631 00:13:39.078 [2024-11-05 11:29:38.254535] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.078 [2024-11-05 11:29:38.254637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.078 11:29:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74631 00:13:39.078 [2024-11-05 11:29:38.254712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.078 [2024-11-05 11:29:38.254725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:39.647 [2024-11-05 11:29:38.644524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.587 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:40.587 00:13:40.587 real 0m8.557s 00:13:40.587 user 0m13.530s 00:13:40.587 sys 0m1.562s 00:13:40.587 11:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:40.587 11:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.587 ************************************ 00:13:40.587 END TEST raid_superblock_test 00:13:40.587 ************************************ 00:13:40.587 11:29:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:40.587 11:29:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:40.587 11:29:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:40.587 11:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.587 ************************************ 00:13:40.587 START TEST raid_read_error_test 00:13:40.587 ************************************ 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:40.587 
11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:40.587 11:29:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XKjFW0K7L6 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75118 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75118 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75118 ']' 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:40.587 11:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.846 [2024-11-05 11:29:39.929735] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:40.846 [2024-11-05 11:29:39.929845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75118 ] 00:13:40.846 [2024-11-05 11:29:40.099197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.106 [2024-11-05 11:29:40.216395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.366 [2024-11-05 11:29:40.429775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.366 [2024-11-05 11:29:40.429813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 BaseBdev1_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 true 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 [2024-11-05 11:29:40.836364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:41.626 [2024-11-05 11:29:40.836463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.626 [2024-11-05 11:29:40.836503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:41.626 [2024-11-05 11:29:40.836514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.626 [2024-11-05 11:29:40.838692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.626 [2024-11-05 11:29:40.838738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.626 BaseBdev1 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 BaseBdev2_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 true 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.626 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.886 [2024-11-05 11:29:40.904614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:41.886 [2024-11-05 11:29:40.904678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.886 [2024-11-05 11:29:40.904697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:41.886 [2024-11-05 11:29:40.904708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.886 [2024-11-05 11:29:40.906834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.886 [2024-11-05 11:29:40.906873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.886 BaseBdev2 00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.886 BaseBdev3_malloc 00:13:41.886 11:29:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.886 true
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.886 [2024-11-05 11:29:40.983458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:13:41.886 [2024-11-05 11:29:40.983511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:41.886 [2024-11-05 11:29:40.983527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:13:41.886 [2024-11-05 11:29:40.983537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:41.886 [2024-11-05 11:29:40.985630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:41.886 [2024-11-05 11:29:40.985735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:41.886 BaseBdev3
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.886 11:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.886 BaseBdev4_malloc
00:13:41.886 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.886 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.887 true
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.887 [2024-11-05 11:29:41.052454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:13:41.887 [2024-11-05 11:29:41.052559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:41.887 [2024-11-05 11:29:41.052599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:41.887 [2024-11-05 11:29:41.052640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:41.887 [2024-11-05 11:29:41.054942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:41.887 [2024-11-05 11:29:41.055034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:41.887 BaseBdev4
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.887 [2024-11-05 11:29:41.064485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:41.887 [2024-11-05 11:29:41.066358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:41.887 [2024-11-05 11:29:41.066490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:41.887 [2024-11-05 11:29:41.066581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:41.887 [2024-11-05 11:29:41.066879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:13:41.887 [2024-11-05 11:29:41.066933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:41.887 [2024-11-05 11:29:41.067259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:41.887 [2024-11-05 11:29:41.067494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:13:41.887 [2024-11-05 11:29:41.067541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:13:41.887 [2024-11-05 11:29:41.067764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:41.887 "name": "raid_bdev1",
00:13:41.887 "uuid": "12a89344-02d9-40f0-92e0-351b178a2d72",
00:13:41.887 "strip_size_kb": 0,
00:13:41.887 "state": "online",
00:13:41.887 "raid_level": "raid1",
00:13:41.887 "superblock": true,
00:13:41.887 "num_base_bdevs": 4,
00:13:41.887 "num_base_bdevs_discovered": 4,
00:13:41.887 "num_base_bdevs_operational": 4,
00:13:41.887 "base_bdevs_list": [
00:13:41.887 {
00:13:41.887 "name": "BaseBdev1",
00:13:41.887 "uuid": "3405ebb0-4714-53fc-9634-b4d7d220a6bb",
00:13:41.887 "is_configured": true,
00:13:41.887 "data_offset": 2048,
00:13:41.887 "data_size": 63488
00:13:41.887 },
00:13:41.887 {
00:13:41.887 "name": "BaseBdev2",
00:13:41.887 "uuid": "7586a8c5-5404-592a-94fd-c28a22c5b476",
00:13:41.887 "is_configured": true,
00:13:41.887 "data_offset": 2048,
00:13:41.887 "data_size": 63488
00:13:41.887 },
00:13:41.887 {
00:13:41.887 "name": "BaseBdev3",
00:13:41.887 "uuid": "4e31cdfc-8255-5b81-9958-0417924ca5ba",
00:13:41.887 "is_configured": true,
00:13:41.887 "data_offset": 2048,
00:13:41.887 "data_size": 63488
00:13:41.887 },
00:13:41.887 {
00:13:41.887 "name": "BaseBdev4",
00:13:41.887 "uuid": "15b04872-b6cd-5075-b762-ca92c71d2883",
00:13:41.887 "is_configured": true,
00:13:41.887 "data_offset": 2048,
00:13:41.887 "data_size": 63488
00:13:41.887 }
00:13:41.887 ]
00:13:41.887 }'
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:41.887 11:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.457 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:13:42.457 11:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:42.457 [2024-11-05 11:29:41.549099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:43.396 "name": "raid_bdev1",
00:13:43.396 "uuid": "12a89344-02d9-40f0-92e0-351b178a2d72",
00:13:43.396 "strip_size_kb": 0,
00:13:43.396 "state": "online",
00:13:43.396 "raid_level": "raid1",
00:13:43.396 "superblock": true,
00:13:43.396 "num_base_bdevs": 4,
00:13:43.396 "num_base_bdevs_discovered": 4,
00:13:43.396 "num_base_bdevs_operational": 4,
00:13:43.396 "base_bdevs_list": [
00:13:43.396 {
00:13:43.396 "name": "BaseBdev1",
00:13:43.396 "uuid": "3405ebb0-4714-53fc-9634-b4d7d220a6bb",
00:13:43.396 "is_configured": true,
00:13:43.396 "data_offset": 2048,
00:13:43.396 "data_size": 63488
00:13:43.396 },
00:13:43.396 {
00:13:43.396 "name": "BaseBdev2",
00:13:43.396 "uuid": "7586a8c5-5404-592a-94fd-c28a22c5b476",
00:13:43.396 "is_configured": true,
00:13:43.396 "data_offset": 2048,
00:13:43.396 "data_size": 63488
00:13:43.396 },
00:13:43.396 {
00:13:43.396 "name": "BaseBdev3",
00:13:43.396 "uuid": "4e31cdfc-8255-5b81-9958-0417924ca5ba",
00:13:43.396 "is_configured": true,
00:13:43.396 "data_offset": 2048,
00:13:43.396 "data_size": 63488
00:13:43.396 },
00:13:43.396 {
00:13:43.396 "name": "BaseBdev4",
00:13:43.396 "uuid": "15b04872-b6cd-5075-b762-ca92c71d2883",
00:13:43.396 "is_configured": true,
00:13:43.396 "data_offset": 2048,
00:13:43.396 "data_size": 63488
00:13:43.396 }
00:13:43.396 ]
00:13:43.396 }'
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:43.396 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.656 [2024-11-05 11:29:42.862874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:43.656 [2024-11-05 11:29:42.862957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:43.656 [2024-11-05 11:29:42.865793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:43.656 [2024-11-05 11:29:42.865897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:43.656 [2024-11-05 11:29:42.866041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:43.656 [2024-11-05 11:29:42.866092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:13:43.656 {
00:13:43.656 "results": [
00:13:43.656 {
00:13:43.656 "job": "raid_bdev1",
00:13:43.656 "core_mask": "0x1",
00:13:43.656 "workload": "randrw",
00:13:43.656 "percentage": 50,
00:13:43.656 "status": "finished",
00:13:43.656 "queue_depth": 1,
00:13:43.656 "io_size": 131072,
00:13:43.656 "runtime": 1.314517,
00:13:43.656 "iops": 10543.79669490771,
00:13:43.656 "mibps": 1317.9745868634639,
00:13:43.656 "io_failed": 0,
00:13:43.656 "io_timeout": 0,
00:13:43.656 "avg_latency_us": 92.16544433732207,
00:13:43.656 "min_latency_us": 23.699563318777294,
00:13:43.656 "max_latency_us": 1781.4917030567685
00:13:43.656 }
00:13:43.656 ],
00:13:43.656 "core_count": 1
00:13:43.656 }
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75118
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75118 ']'
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75118
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75118
00:13:43.656 killing process with pid 75118 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75118'
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75118
00:13:43.656 [2024-11-05 11:29:42.915404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:43.656 11:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75118
00:13:44.225 [2024-11-05 11:29:43.252362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XKjFW0K7L6
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:13:45.161
00:13:45.161 real 0m4.600s
00:13:45.161 user 0m5.386s
00:13:45.161 sys 0m0.568s
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:45.161 ************************************
00:13:45.161 END TEST raid_read_error_test
00:13:45.161 ************************************
00:13:45.161 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.420 11:29:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write
00:13:45.420 11:29:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:13:45.420 11:29:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:45.420 11:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:45.420 ************************************
00:13:45.420 START TEST raid_write_error_test
00:13:45.420 ************************************
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.n1XVdRSX0x
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75264
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75264
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75264 ']'
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:45.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:45.420 11:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.420 [2024-11-05 11:29:44.611022] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization...
00:13:45.420 [2024-11-05 11:29:44.611150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75264 ]
00:13:45.680 [2024-11-05 11:29:44.782677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:45.680 [2024-11-05 11:29:44.907430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:45.938 [2024-11-05 11:29:45.110366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:45.938 [2024-11-05 11:29:45.110435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.198 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 BaseBdev1_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 true
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 [2024-11-05 11:29:45.511064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:13:46.458 [2024-11-05 11:29:45.511179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.458 [2024-11-05 11:29:45.511220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:13:46.458 [2024-11-05 11:29:45.511257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.458 [2024-11-05 11:29:45.513356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.458 [2024-11-05 11:29:45.513430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:46.458 BaseBdev1
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 BaseBdev2_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 true
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 [2024-11-05 11:29:45.576252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:13:46.458 [2024-11-05 11:29:45.576354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.458 [2024-11-05 11:29:45.576389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:13:46.458 [2024-11-05 11:29:45.576399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.458 [2024-11-05 11:29:45.578377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.458 [2024-11-05 11:29:45.578414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:46.458 BaseBdev2
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 BaseBdev3_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 true
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 [2024-11-05 11:29:45.674192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:13:46.458 [2024-11-05 11:29:45.674244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.458 [2024-11-05 11:29:45.674264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:13:46.458 [2024-11-05 11:29:45.674275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.458 [2024-11-05 11:29:45.676426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.458 [2024-11-05 11:29:45.676537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:46.458 BaseBdev3
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.458 BaseBdev4_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.458 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.716 true
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.716 [2024-11-05 11:29:45.740791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:13:46.716 [2024-11-05 11:29:45.740888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.716 [2024-11-05 11:29:45.740929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:46.716 [2024-11-05 11:29:45.740939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.716 [2024-11-05 11:29:45.743115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.716 [2024-11-05 11:29:45.743169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 BaseBdev4
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.716 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.716 [2024-11-05 11:29:45.752818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:46.716 [2024-11-05 11:29:45.754637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:46.716 [2024-11-05 11:29:45.754713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:46.717 [2024-11-05 11:29:45.754777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:46.717 [2024-11-05 11:29:45.755000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:13:46.717 [2024-11-05 11:29:45.755023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:46.717 [2024-11-05 11:29:45.755318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:46.717 [2024-11-05 11:29:45.755490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:13:46.717 [2024-11-05 11:29:45.755505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:13:46.717 [2024-11-05 11:29:45.755669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:46.717 "name": "raid_bdev1",
00:13:46.717 "uuid": "cdade121-a776-4e71-8ff9-f8c2d75cb868",
00:13:46.717 "strip_size_kb": 0,
00:13:46.717 "state": "online",
00:13:46.717 "raid_level": "raid1",
00:13:46.717 "superblock": true,
00:13:46.717 "num_base_bdevs": 4,
00:13:46.717 "num_base_bdevs_discovered": 4,
00:13:46.717 "num_base_bdevs_operational": 4,
00:13:46.717 "base_bdevs_list": [
00:13:46.717 {
00:13:46.717 "name": "BaseBdev1",
00:13:46.717 "uuid": "60cc8807-ec31-5fe4-b12c-f883f8973d42",
00:13:46.717 "is_configured": true,
00:13:46.717 "data_offset": 2048,
00:13:46.717 "data_size": 63488
00:13:46.717 },
00:13:46.717 {
00:13:46.717 "name": "BaseBdev2",
00:13:46.717 "uuid": "836ac2c6-cdba-55f7-bcbd-48cf97756272",
00:13:46.717 "is_configured": true,
00:13:46.717 "data_offset": 2048,
00:13:46.717 "data_size": 63488
00:13:46.717 },
00:13:46.717 {
00:13:46.717 "name": "BaseBdev3",
00:13:46.717 "uuid": "16ab15c9-f539-5fae-b08d-763c6642267a",
00:13:46.717 "is_configured": true,
00:13:46.717 "data_offset": 2048,
00:13:46.717 "data_size": 63488
00:13:46.717 },
00:13:46.717 {
00:13:46.717 "name": "BaseBdev4",
00:13:46.717 "uuid": "615f48f6-8a84-51d2-9f70-b540505774af",
00:13:46.717 "is_configured": true,
00:13:46.717 "data_offset": 2048,
00:13:46.717 "data_size": 63488
00:13:46.717 }
00:13:46.717 ]
00:13:46.717 }'
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:46.717 11:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.975 11:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:13:46.975 11:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:47.234 [2024-11-05 11:29:46.305332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:48.170 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:13:48.170 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:48.170 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.170 [2024-11-05 11:29:47.228623] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:13:48.170 [2024-11-05 11:29:47.228684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:48.170 [2024-11-05 11:29:47.228920] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:13:48.170 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:48.170 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:48.171 11:29:47 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.171 "name": "raid_bdev1", 00:13:48.171 "uuid": "cdade121-a776-4e71-8ff9-f8c2d75cb868", 00:13:48.171 "strip_size_kb": 0, 00:13:48.171 "state": "online", 00:13:48.171 "raid_level": "raid1", 00:13:48.171 "superblock": true, 00:13:48.171 "num_base_bdevs": 4, 00:13:48.171 "num_base_bdevs_discovered": 3, 00:13:48.171 "num_base_bdevs_operational": 3, 00:13:48.171 "base_bdevs_list": [ 00:13:48.171 { 00:13:48.171 "name": null, 00:13:48.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.171 "is_configured": false, 00:13:48.171 "data_offset": 0, 00:13:48.171 "data_size": 63488 00:13:48.171 }, 00:13:48.171 { 00:13:48.171 "name": "BaseBdev2", 00:13:48.171 "uuid": "836ac2c6-cdba-55f7-bcbd-48cf97756272", 00:13:48.171 "is_configured": true, 00:13:48.171 "data_offset": 2048, 00:13:48.171 "data_size": 63488 00:13:48.171 }, 00:13:48.171 { 00:13:48.171 "name": "BaseBdev3", 00:13:48.171 "uuid": "16ab15c9-f539-5fae-b08d-763c6642267a", 00:13:48.171 "is_configured": true, 00:13:48.171 "data_offset": 2048, 00:13:48.171 "data_size": 63488 00:13:48.171 }, 00:13:48.171 { 00:13:48.171 "name": "BaseBdev4", 00:13:48.171 "uuid": "615f48f6-8a84-51d2-9f70-b540505774af", 00:13:48.171 "is_configured": true, 00:13:48.171 "data_offset": 2048, 00:13:48.171 "data_size": 63488 00:13:48.171 } 00:13:48.171 ] 
00:13:48.171 }' 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.171 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.738 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.738 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.738 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.738 [2024-11-05 11:29:47.713394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.738 [2024-11-05 11:29:47.713424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.738 [2024-11-05 11:29:47.716077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.738 [2024-11-05 11:29:47.716121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.738 [2024-11-05 11:29:47.716313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.739 [2024-11-05 11:29:47.716366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:48.739 { 00:13:48.739 "results": [ 00:13:48.739 { 00:13:48.739 "job": "raid_bdev1", 00:13:48.739 "core_mask": "0x1", 00:13:48.739 "workload": "randrw", 00:13:48.739 "percentage": 50, 00:13:48.739 "status": "finished", 00:13:48.739 "queue_depth": 1, 00:13:48.739 "io_size": 131072, 00:13:48.739 "runtime": 1.408898, 00:13:48.739 "iops": 11355.683661982628, 00:13:48.739 "mibps": 1419.4604577478285, 00:13:48.739 "io_failed": 0, 00:13:48.739 "io_timeout": 0, 00:13:48.739 "avg_latency_us": 85.3550010631123, 00:13:48.739 "min_latency_us": 23.58777292576419, 00:13:48.739 "max_latency_us": 1609.7816593886462 00:13:48.739 } 00:13:48.739 ], 00:13:48.739 "core_count": 1 
00:13:48.739 } 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75264 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75264 ']' 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75264 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75264 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75264' 00:13:48.739 killing process with pid 75264 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75264 00:13:48.739 [2024-11-05 11:29:47.754099] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.739 11:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75264 00:13:48.999 [2024-11-05 11:29:48.092364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.n1XVdRSX0x 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:50.381 00:13:50.381 real 0m4.821s 00:13:50.381 user 0m5.708s 00:13:50.381 sys 0m0.583s 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.381 11:29:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.381 ************************************ 00:13:50.381 END TEST raid_write_error_test 00:13:50.381 ************************************ 00:13:50.381 11:29:49 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:50.381 11:29:49 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:50.381 11:29:49 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:50.381 11:29:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:50.381 11:29:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.381 11:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.381 ************************************ 00:13:50.381 START TEST raid_rebuild_test 00:13:50.381 ************************************ 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:50.381 
11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
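The trace above shows `raid_rebuild_test` building its `base_bdevs` array with an arithmetic loop (bdev_raid.sh@574-576). A minimal, self-contained sketch of that name-generation pattern — variable names mirror the log, but this is an illustration, not the exact SPDK script:

```shell
# Sketch of the base-bdev naming loop seen in the trace: generate
# "BaseBdev1".."BaseBdevN" and collect the names into an array, as the
# rebuild test does before handing them to bdev_raid_create.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"
```

With `num_base_bdevs=2` this prints `BaseBdev1 BaseBdev2`, matching the two `echo BaseBdev*` lines in the trace.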
00:13:50.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75407 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75407 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75407 ']' 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:50.381 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.381 [2024-11-05 11:29:49.498952] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:50.381 [2024-11-05 11:29:49.499189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:50.381 Zero copy mechanism will not be used. 
00:13:50.381 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75407 ] 00:13:50.640 [2024-11-05 11:29:49.673196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.640 [2024-11-05 11:29:49.790114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.899 [2024-11-05 11:29:49.994395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.899 [2024-11-05 11:29:49.994516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.159 BaseBdev1_malloc 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.159 [2024-11-05 11:29:50.397327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:51.159 [2024-11-05 11:29:50.397406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.159 [2024-11-05 
11:29:50.397429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:51.159 [2024-11-05 11:29:50.397441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.159 [2024-11-05 11:29:50.399654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.159 [2024-11-05 11:29:50.399696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.159 BaseBdev1 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.159 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 BaseBdev2_malloc 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 [2024-11-05 11:29:50.453107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:51.419 [2024-11-05 11:29:50.453195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.419 [2024-11-05 11:29:50.453216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:51.419 [2024-11-05 11:29:50.453226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
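Throughout the trace, every RPC is followed by `autotest_common.sh@589 -- # [[ 0 == 0 ]]`: the helper runs the command with xtrace suppressed and asserts a zero exit status. A hedged sketch of that check — `rpc_cmd` here is a stub so the snippet is self-contained; the real helper invokes SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock`:

```shell
# Sketch of the rpc_cmd success check visible throughout the log
# (the [[ 0 == 0 ]] lines): run an RPC, capture its exit status, and
# only proceed if it returned 0. The stub below stands in for the real
# rpc.py wrapper purely for illustration.
rpc_cmd() { echo "stub: $*"; return 0; }   # stand-in, not SPDK's helper
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc > /dev/null
rc=$?
[[ $rc == 0 ]] && echo "rpc ok" || echo "rpc failed with $rc"
```

The `[[ 0 == 0 ]]` lines in the trace are exactly this comparison after xtrace is re-enabled.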
00:13:51.419 [2024-11-05 11:29:50.455447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.419 [2024-11-05 11:29:50.455489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.419 BaseBdev2 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 spare_malloc 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 spare_delay 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 [2024-11-05 11:29:50.534516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.419 [2024-11-05 11:29:50.534582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.419 [2024-11-05 11:29:50.534603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:51.419 [2024-11-05 11:29:50.534614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.419 [2024-11-05 11:29:50.536872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.419 [2024-11-05 11:29:50.536912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.419 spare 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 [2024-11-05 11:29:50.546558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.419 [2024-11-05 11:29:50.548525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.419 [2024-11-05 11:29:50.548613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:51.419 [2024-11-05 11:29:50.548627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:51.419 [2024-11-05 11:29:50.548877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:51.419 [2024-11-05 11:29:50.549032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:51.419 [2024-11-05 11:29:50.549043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:51.419 [2024-11-05 11:29:50.549239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 
11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.419 "name": "raid_bdev1", 00:13:51.419 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:51.419 "strip_size_kb": 0, 00:13:51.419 "state": "online", 00:13:51.419 "raid_level": "raid1", 00:13:51.419 "superblock": false, 00:13:51.419 "num_base_bdevs": 2, 00:13:51.419 "num_base_bdevs_discovered": 
2, 00:13:51.419 "num_base_bdevs_operational": 2, 00:13:51.419 "base_bdevs_list": [ 00:13:51.419 { 00:13:51.419 "name": "BaseBdev1", 00:13:51.419 "uuid": "39021d62-538e-57f0-8fe2-4a5c159259fd", 00:13:51.419 "is_configured": true, 00:13:51.419 "data_offset": 0, 00:13:51.419 "data_size": 65536 00:13:51.419 }, 00:13:51.419 { 00:13:51.419 "name": "BaseBdev2", 00:13:51.419 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:51.419 "is_configured": true, 00:13:51.419 "data_offset": 0, 00:13:51.419 "data_size": 65536 00:13:51.419 } 00:13:51.419 ] 00:13:51.419 }' 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.419 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.988 [2024-11-05 11:29:51.026000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.988 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:52.248 [2024-11-05 11:29:51.309305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.248 /dev/nbd0 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
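Shortly below, the test fills the raid bdev through `/dev/nbd0` with `dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536` and checks the byte count reported (33554432 bytes). A scaled-down, hedged sketch of that write-and-verify step, using a temp file as a stand-in for the nbd device so it can run anywhere:

```shell
# Sketch of the dd-based fill used by the rebuild test (bdev_raid.sh@635),
# scaled down and pointed at a temp file instead of /dev/nbd0. The byte
# count should equal bs * count, just as 512 * 65536 = 33554432 in the log.
target=$(mktemp)                         # stand-in for /dev/nbd0
dd if=/dev/urandom of="$target" bs=512 count=128 2>/dev/null
size=$(wc -c < "$target")
echo "$size"                             # expect 512 * 128 = 65536
rm -f "$target"
```

On the real device the test then compares the written region against the surviving base bdevs after the rebuild.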
00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.248 1+0 records in 00:13:52.248 1+0 records out 00:13:52.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457579 s, 9.0 MB/s 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:52.248 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:56.440 65536+0 records in 00:13:56.440 65536+0 records out 00:13:56.440 33554432 bytes (34 MB, 32 MiB) copied, 3.83853 s, 8.7 MB/s 00:13:56.440 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:56.440 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.440 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:56.440 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.441 [2024-11-05 11:29:55.411863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.441 
11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.441 [2024-11-05 11:29:55.451893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.441 "name": "raid_bdev1", 00:13:56.441 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:56.441 "strip_size_kb": 0, 00:13:56.441 "state": "online", 00:13:56.441 "raid_level": "raid1", 00:13:56.441 "superblock": false, 00:13:56.441 "num_base_bdevs": 2, 00:13:56.441 "num_base_bdevs_discovered": 1, 00:13:56.441 "num_base_bdevs_operational": 1, 00:13:56.441 "base_bdevs_list": [ 00:13:56.441 { 00:13:56.441 "name": null, 00:13:56.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.441 "is_configured": false, 00:13:56.441 "data_offset": 0, 00:13:56.441 "data_size": 65536 00:13:56.441 }, 00:13:56.441 { 00:13:56.441 "name": "BaseBdev2", 00:13:56.441 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:56.441 "is_configured": true, 00:13:56.441 "data_offset": 0, 00:13:56.441 "data_size": 65536 00:13:56.441 } 00:13:56.441 ] 00:13:56.441 }' 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.441 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.700 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.700 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.700 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.700 [2024-11-05 11:29:55.899214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.700 [2024-11-05 11:29:55.915643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:56.700 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.700 11:29:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:56.700 [2024-11-05 11:29:55.917502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.080 "name": "raid_bdev1", 00:13:58.080 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:58.080 "strip_size_kb": 0, 00:13:58.080 "state": "online", 00:13:58.080 "raid_level": "raid1", 00:13:58.080 "superblock": false, 00:13:58.080 "num_base_bdevs": 2, 00:13:58.080 "num_base_bdevs_discovered": 2, 00:13:58.080 "num_base_bdevs_operational": 2, 00:13:58.080 "process": { 00:13:58.080 "type": "rebuild", 00:13:58.080 "target": "spare", 00:13:58.080 "progress": { 00:13:58.080 "blocks": 20480, 00:13:58.080 "percent": 31 00:13:58.080 } 00:13:58.080 }, 00:13:58.080 "base_bdevs_list": [ 00:13:58.080 { 
00:13:58.080 "name": "spare", 00:13:58.080 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:13:58.080 "is_configured": true, 00:13:58.080 "data_offset": 0, 00:13:58.080 "data_size": 65536 00:13:58.080 }, 00:13:58.080 { 00:13:58.080 "name": "BaseBdev2", 00:13:58.080 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:58.080 "is_configured": true, 00:13:58.080 "data_offset": 0, 00:13:58.080 "data_size": 65536 00:13:58.080 } 00:13:58.080 ] 00:13:58.080 }' 00:13:58.080 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.080 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.080 [2024-11-05 11:29:57.080988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.080 [2024-11-05 11:29:57.122623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.080 [2024-11-05 11:29:57.122747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.080 [2024-11-05 11:29:57.122782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.080 [2024-11-05 11:29:57.122805] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.081 11:29:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.081 "name": "raid_bdev1", 00:13:58.081 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:58.081 "strip_size_kb": 0, 00:13:58.081 "state": "online", 00:13:58.081 "raid_level": "raid1", 00:13:58.081 "superblock": false, 00:13:58.081 "num_base_bdevs": 2, 00:13:58.081 "num_base_bdevs_discovered": 1, 
00:13:58.081 "num_base_bdevs_operational": 1, 00:13:58.081 "base_bdevs_list": [ 00:13:58.081 { 00:13:58.081 "name": null, 00:13:58.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.081 "is_configured": false, 00:13:58.081 "data_offset": 0, 00:13:58.081 "data_size": 65536 00:13:58.081 }, 00:13:58.081 { 00:13:58.081 "name": "BaseBdev2", 00:13:58.081 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:58.081 "is_configured": true, 00:13:58.081 "data_offset": 0, 00:13:58.081 "data_size": 65536 00:13:58.081 } 00:13:58.081 ] 00:13:58.081 }' 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.081 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.340 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.340 "name": "raid_bdev1", 00:13:58.340 "uuid": 
"dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:58.340 "strip_size_kb": 0, 00:13:58.340 "state": "online", 00:13:58.340 "raid_level": "raid1", 00:13:58.340 "superblock": false, 00:13:58.340 "num_base_bdevs": 2, 00:13:58.340 "num_base_bdevs_discovered": 1, 00:13:58.340 "num_base_bdevs_operational": 1, 00:13:58.340 "base_bdevs_list": [ 00:13:58.340 { 00:13:58.341 "name": null, 00:13:58.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.341 "is_configured": false, 00:13:58.341 "data_offset": 0, 00:13:58.341 "data_size": 65536 00:13:58.341 }, 00:13:58.341 { 00:13:58.341 "name": "BaseBdev2", 00:13:58.341 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:58.341 "is_configured": true, 00:13:58.341 "data_offset": 0, 00:13:58.341 "data_size": 65536 00:13:58.341 } 00:13:58.341 ] 00:13:58.341 }' 00:13:58.341 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.602 [2024-11-05 11:29:57.693575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.602 [2024-11-05 11:29:57.710016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.602 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:13:58.602 [2024-11-05 11:29:57.711967] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.541 "name": "raid_bdev1", 00:13:59.541 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:59.541 "strip_size_kb": 0, 00:13:59.541 "state": "online", 00:13:59.541 "raid_level": "raid1", 00:13:59.541 "superblock": false, 00:13:59.541 "num_base_bdevs": 2, 00:13:59.541 "num_base_bdevs_discovered": 2, 00:13:59.541 "num_base_bdevs_operational": 2, 00:13:59.541 "process": { 00:13:59.541 "type": "rebuild", 00:13:59.541 "target": "spare", 00:13:59.541 "progress": { 00:13:59.541 "blocks": 20480, 00:13:59.541 "percent": 31 00:13:59.541 } 00:13:59.541 }, 00:13:59.541 "base_bdevs_list": [ 00:13:59.541 { 00:13:59.541 "name": "spare", 00:13:59.541 "uuid": 
"c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:13:59.541 "is_configured": true, 00:13:59.541 "data_offset": 0, 00:13:59.541 "data_size": 65536 00:13:59.541 }, 00:13:59.541 { 00:13:59.541 "name": "BaseBdev2", 00:13:59.541 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:59.541 "is_configured": true, 00:13:59.541 "data_offset": 0, 00:13:59.541 "data_size": 65536 00:13:59.541 } 00:13:59.541 ] 00:13:59.541 }' 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.541 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.801 "name": "raid_bdev1", 00:13:59.801 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:13:59.801 "strip_size_kb": 0, 00:13:59.801 "state": "online", 00:13:59.801 "raid_level": "raid1", 00:13:59.801 "superblock": false, 00:13:59.801 "num_base_bdevs": 2, 00:13:59.801 "num_base_bdevs_discovered": 2, 00:13:59.801 "num_base_bdevs_operational": 2, 00:13:59.801 "process": { 00:13:59.801 "type": "rebuild", 00:13:59.801 "target": "spare", 00:13:59.801 "progress": { 00:13:59.801 "blocks": 22528, 00:13:59.801 "percent": 34 00:13:59.801 } 00:13:59.801 }, 00:13:59.801 "base_bdevs_list": [ 00:13:59.801 { 00:13:59.801 "name": "spare", 00:13:59.801 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:13:59.801 "is_configured": true, 00:13:59.801 "data_offset": 0, 00:13:59.801 "data_size": 65536 00:13:59.801 }, 00:13:59.801 { 00:13:59.801 "name": "BaseBdev2", 00:13:59.801 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:13:59.801 "is_configured": true, 00:13:59.801 "data_offset": 0, 00:13:59.801 "data_size": 65536 00:13:59.801 } 00:13:59.801 ] 00:13:59.801 }' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.801 11:29:58 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.801 11:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.801 11:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.182 "name": "raid_bdev1", 00:14:01.182 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:14:01.182 "strip_size_kb": 0, 00:14:01.182 "state": "online", 00:14:01.182 "raid_level": "raid1", 00:14:01.182 "superblock": false, 00:14:01.182 "num_base_bdevs": 2, 00:14:01.182 "num_base_bdevs_discovered": 2, 00:14:01.182 "num_base_bdevs_operational": 2, 00:14:01.182 "process": { 00:14:01.182 "type": "rebuild", 00:14:01.182 "target": "spare", 
00:14:01.182 "progress": { 00:14:01.182 "blocks": 47104, 00:14:01.182 "percent": 71 00:14:01.182 } 00:14:01.182 }, 00:14:01.182 "base_bdevs_list": [ 00:14:01.182 { 00:14:01.182 "name": "spare", 00:14:01.182 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:14:01.182 "is_configured": true, 00:14:01.182 "data_offset": 0, 00:14:01.182 "data_size": 65536 00:14:01.182 }, 00:14:01.182 { 00:14:01.182 "name": "BaseBdev2", 00:14:01.182 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:14:01.182 "is_configured": true, 00:14:01.182 "data_offset": 0, 00:14:01.182 "data_size": 65536 00:14:01.182 } 00:14:01.182 ] 00:14:01.182 }' 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.182 11:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.752 [2024-11-05 11:30:00.925250] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:01.752 [2024-11-05 11:30:00.925402] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:01.752 [2024-11-05 11:30:00.925498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.011 "name": "raid_bdev1", 00:14:02.011 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:14:02.011 "strip_size_kb": 0, 00:14:02.011 "state": "online", 00:14:02.011 "raid_level": "raid1", 00:14:02.011 "superblock": false, 00:14:02.011 "num_base_bdevs": 2, 00:14:02.011 "num_base_bdevs_discovered": 2, 00:14:02.011 "num_base_bdevs_operational": 2, 00:14:02.011 "base_bdevs_list": [ 00:14:02.011 { 00:14:02.011 "name": "spare", 00:14:02.011 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:14:02.011 "is_configured": true, 00:14:02.011 "data_offset": 0, 00:14:02.011 "data_size": 65536 00:14:02.011 }, 00:14:02.011 { 00:14:02.011 "name": "BaseBdev2", 00:14:02.011 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:14:02.011 "is_configured": true, 00:14:02.011 "data_offset": 0, 00:14:02.011 "data_size": 65536 00:14:02.011 } 00:14:02.011 ] 00:14:02.011 }' 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.011 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.271 "name": "raid_bdev1", 00:14:02.271 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:14:02.271 "strip_size_kb": 0, 00:14:02.271 "state": "online", 00:14:02.271 "raid_level": "raid1", 00:14:02.271 "superblock": false, 00:14:02.271 "num_base_bdevs": 2, 00:14:02.271 "num_base_bdevs_discovered": 2, 00:14:02.271 "num_base_bdevs_operational": 2, 00:14:02.271 "base_bdevs_list": [ 00:14:02.271 { 00:14:02.271 "name": "spare", 00:14:02.271 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:14:02.271 "is_configured": true, 00:14:02.271 "data_offset": 0, 00:14:02.271 "data_size": 65536 
00:14:02.271 }, 00:14:02.271 { 00:14:02.271 "name": "BaseBdev2", 00:14:02.271 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:14:02.271 "is_configured": true, 00:14:02.271 "data_offset": 0, 00:14:02.271 "data_size": 65536 00:14:02.271 } 00:14:02.271 ] 00:14:02.271 }' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.271 "name": "raid_bdev1", 00:14:02.271 "uuid": "dc7fb871-8593-42c0-8ab5-94fd9e025187", 00:14:02.271 "strip_size_kb": 0, 00:14:02.271 "state": "online", 00:14:02.271 "raid_level": "raid1", 00:14:02.271 "superblock": false, 00:14:02.271 "num_base_bdevs": 2, 00:14:02.271 "num_base_bdevs_discovered": 2, 00:14:02.271 "num_base_bdevs_operational": 2, 00:14:02.271 "base_bdevs_list": [ 00:14:02.271 { 00:14:02.271 "name": "spare", 00:14:02.271 "uuid": "c1b02a82-5ecb-57d1-a557-dd338cdba60f", 00:14:02.271 "is_configured": true, 00:14:02.271 "data_offset": 0, 00:14:02.271 "data_size": 65536 00:14:02.271 }, 00:14:02.271 { 00:14:02.271 "name": "BaseBdev2", 00:14:02.271 "uuid": "cc74cbd0-3888-5fea-971d-24f5770809d1", 00:14:02.271 "is_configured": true, 00:14:02.271 "data_offset": 0, 00:14:02.271 "data_size": 65536 00:14:02.271 } 00:14:02.271 ] 00:14:02.271 }' 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.271 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.840 [2024-11-05 11:30:01.890715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.840 [2024-11-05 11:30:01.890809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:02.840 [2024-11-05 11:30:01.890938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.840 [2024-11-05 11:30:01.891035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.840 [2024-11-05 11:30:01.891092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.840 11:30:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:03.100 /dev/nbd0 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.100 1+0 records in 00:14:03.100 1+0 records out 00:14:03.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361769 s, 11.3 MB/s 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.100 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:03.360 /dev/nbd1 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:14:03.360 1+0 records in 00:14:03.360 1+0 records out 00:14:03.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302855 s, 13.5 MB/s 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.360 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.620 11:30:02 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.620 11:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75407 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 
75407 ']' 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75407 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75407 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.879 killing process with pid 75407 00:14:03.879 Received shutdown signal, test time was about 60.000000 seconds 00:14:03.879 00:14:03.879 Latency(us) 00:14:03.879 [2024-11-05T11:30:03.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.879 [2024-11-05T11:30:03.153Z] =================================================================================================================== 00:14:03.879 [2024-11-05T11:30:03.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75407' 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75407 00:14:03.879 [2024-11-05 11:30:03.086557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.879 11:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75407 00:14:04.138 [2024-11-05 11:30:03.385626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.520 00:14:05.520 real 0m15.069s 00:14:05.520 user 0m17.296s 00:14:05.520 sys 0m2.880s 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:14:05.520 ************************************ 00:14:05.520 END TEST raid_rebuild_test 00:14:05.520 ************************************ 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.520 11:30:04 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:05.520 11:30:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:05.520 11:30:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:05.520 11:30:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.520 ************************************ 00:14:05.520 START TEST raid_rebuild_test_sb 00:14:05.520 ************************************ 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:05.520 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75821 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75821 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75821 ']' 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.521 11:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.521 [2024-11-05 11:30:04.633310] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:05.521 [2024-11-05 11:30:04.633877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75821 ] 00:14:05.521 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.521 Zero copy mechanism will not be used. 
00:14:05.780 [2024-11-05 11:30:04.806073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.780 [2024-11-05 11:30:04.919395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.039 [2024-11-05 11:30:05.116119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.039 [2024-11-05 11:30:05.116239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.299 BaseBdev1_malloc 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.299 [2024-11-05 11:30:05.510394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.299 [2024-11-05 11:30:05.510460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.299 [2024-11-05 11:30:05.510484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.299 [2024-11-05 
11:30:05.510495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.299 [2024-11-05 11:30:05.512590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.299 [2024-11-05 11:30:05.512683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.299 BaseBdev1 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.299 BaseBdev2_malloc 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.299 [2024-11-05 11:30:05.564414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.299 [2024-11-05 11:30:05.564468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.299 [2024-11-05 11:30:05.564487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.299 [2024-11-05 11:30:05.564499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.299 [2024-11-05 11:30:05.566494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:06.299 [2024-11-05 11:30:05.566532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.299 BaseBdev2 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.299 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.300 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.300 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 spare_malloc 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 spare_delay 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 [2024-11-05 11:30:05.642291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.560 [2024-11-05 11:30:05.642349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.560 [2024-11-05 11:30:05.642366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:06.560 [2024-11-05 11:30:05.642377] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.560 [2024-11-05 11:30:05.644385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.560 [2024-11-05 11:30:05.644482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.560 spare 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 [2024-11-05 11:30:05.654330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.560 [2024-11-05 11:30:05.656097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.560 [2024-11-05 11:30:05.656315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:06.560 [2024-11-05 11:30:05.656366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.560 [2024-11-05 11:30:05.656615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:06.560 [2024-11-05 11:30:05.656811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:06.560 [2024-11-05 11:30:05.656850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:06.560 [2024-11-05 11:30:05.657034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.560 "name": "raid_bdev1", 00:14:06.560 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:06.560 "strip_size_kb": 0, 00:14:06.560 "state": "online", 00:14:06.560 "raid_level": "raid1", 00:14:06.560 "superblock": true, 00:14:06.560 "num_base_bdevs": 2, 00:14:06.560 
"num_base_bdevs_discovered": 2, 00:14:06.560 "num_base_bdevs_operational": 2, 00:14:06.560 "base_bdevs_list": [ 00:14:06.560 { 00:14:06.560 "name": "BaseBdev1", 00:14:06.560 "uuid": "f7f0c521-6ddd-548a-a79e-1b68ad745cea", 00:14:06.560 "is_configured": true, 00:14:06.560 "data_offset": 2048, 00:14:06.560 "data_size": 63488 00:14:06.560 }, 00:14:06.560 { 00:14:06.560 "name": "BaseBdev2", 00:14:06.560 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:06.560 "is_configured": true, 00:14:06.560 "data_offset": 2048, 00:14:06.560 "data_size": 63488 00:14:06.560 } 00:14:06.560 ] 00:14:06.560 }' 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.560 11:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.130 [2024-11-05 11:30:06.117800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.130 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.130 [2024-11-05 11:30:06.385114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:07.130 /dev/nbd0 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.390 1+0 records in 00:14:07.390 1+0 records out 00:14:07.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460382 s, 8.9 MB/s 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.390 11:30:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:07.390 11:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:11.607 63488+0 records in 00:14:11.607 63488+0 records out 00:14:11.607 32505856 bytes (33 MB, 31 MiB) copied, 3.63929 s, 8.9 MB/s 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.607 [2024-11-05 11:30:10.286186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 [2024-11-05 11:30:10.302282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.607 11:30:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.607 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.607 "name": "raid_bdev1", 00:14:11.607 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:11.607 "strip_size_kb": 0, 00:14:11.607 "state": "online", 00:14:11.607 "raid_level": "raid1", 00:14:11.607 "superblock": true, 00:14:11.607 "num_base_bdevs": 2, 00:14:11.607 "num_base_bdevs_discovered": 1, 00:14:11.607 "num_base_bdevs_operational": 1, 00:14:11.607 "base_bdevs_list": [ 00:14:11.607 { 00:14:11.607 "name": null, 00:14:11.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.607 "is_configured": false, 00:14:11.607 "data_offset": 0, 00:14:11.607 "data_size": 63488 00:14:11.608 }, 00:14:11.608 { 00:14:11.608 "name": "BaseBdev2", 00:14:11.608 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:11.608 "is_configured": true, 00:14:11.608 "data_offset": 2048, 00:14:11.608 "data_size": 63488 00:14:11.608 } 00:14:11.608 ] 00:14:11.608 }' 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-05 11:30:10.677675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:11.608 [2024-11-05 11:30:10.693917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 [2024-11-05 11:30:10.695875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.608 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:12.546 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.546 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.547 "name": "raid_bdev1", 00:14:12.547 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:12.547 "strip_size_kb": 0, 00:14:12.547 "state": "online", 00:14:12.547 "raid_level": "raid1", 00:14:12.547 "superblock": true, 00:14:12.547 "num_base_bdevs": 2, 00:14:12.547 
"num_base_bdevs_discovered": 2, 00:14:12.547 "num_base_bdevs_operational": 2, 00:14:12.547 "process": { 00:14:12.547 "type": "rebuild", 00:14:12.547 "target": "spare", 00:14:12.547 "progress": { 00:14:12.547 "blocks": 20480, 00:14:12.547 "percent": 32 00:14:12.547 } 00:14:12.547 }, 00:14:12.547 "base_bdevs_list": [ 00:14:12.547 { 00:14:12.547 "name": "spare", 00:14:12.547 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:12.547 "is_configured": true, 00:14:12.547 "data_offset": 2048, 00:14:12.547 "data_size": 63488 00:14:12.547 }, 00:14:12.547 { 00:14:12.547 "name": "BaseBdev2", 00:14:12.547 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:12.547 "is_configured": true, 00:14:12.547 "data_offset": 2048, 00:14:12.547 "data_size": 63488 00:14:12.547 } 00:14:12.547 ] 00:14:12.547 }' 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.547 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.806 [2024-11-05 11:30:11.835318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.806 [2024-11-05 11:30:11.901025] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.806 [2024-11-05 11:30:11.901084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.806 [2024-11-05 11:30:11.901099] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.806 [2024-11-05 11:30:11.901108] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.806 11:30:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.806 "name": "raid_bdev1", 00:14:12.806 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:12.806 "strip_size_kb": 0, 00:14:12.806 "state": "online", 00:14:12.806 "raid_level": "raid1", 00:14:12.806 "superblock": true, 00:14:12.806 "num_base_bdevs": 2, 00:14:12.806 "num_base_bdevs_discovered": 1, 00:14:12.806 "num_base_bdevs_operational": 1, 00:14:12.806 "base_bdevs_list": [ 00:14:12.806 { 00:14:12.806 "name": null, 00:14:12.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.806 "is_configured": false, 00:14:12.806 "data_offset": 0, 00:14:12.806 "data_size": 63488 00:14:12.806 }, 00:14:12.806 { 00:14:12.806 "name": "BaseBdev2", 00:14:12.806 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:12.806 "is_configured": true, 00:14:12.806 "data_offset": 2048, 00:14:12.806 "data_size": 63488 00:14:12.806 } 00:14:12.806 ] 00:14:12.806 }' 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.806 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.377 "name": "raid_bdev1", 00:14:13.377 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:13.377 "strip_size_kb": 0, 00:14:13.377 "state": "online", 00:14:13.377 "raid_level": "raid1", 00:14:13.377 "superblock": true, 00:14:13.377 "num_base_bdevs": 2, 00:14:13.377 "num_base_bdevs_discovered": 1, 00:14:13.377 "num_base_bdevs_operational": 1, 00:14:13.377 "base_bdevs_list": [ 00:14:13.377 { 00:14:13.377 "name": null, 00:14:13.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.377 "is_configured": false, 00:14:13.377 "data_offset": 0, 00:14:13.377 "data_size": 63488 00:14:13.377 }, 00:14:13.377 { 00:14:13.377 "name": "BaseBdev2", 00:14:13.377 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:13.377 "is_configured": true, 00:14:13.377 "data_offset": 2048, 00:14:13.377 "data_size": 63488 00:14:13.377 } 00:14:13.377 ] 00:14:13.377 }' 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:13.377 [2024-11-05 11:30:12.594169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.377 [2024-11-05 11:30:12.609455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.377 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:13.377 [2024-11-05 11:30:12.611277] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.758 "name": "raid_bdev1", 00:14:14.758 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:14.758 "strip_size_kb": 0, 00:14:14.758 "state": "online", 00:14:14.758 "raid_level": "raid1", 
00:14:14.758 "superblock": true, 00:14:14.758 "num_base_bdevs": 2, 00:14:14.758 "num_base_bdevs_discovered": 2, 00:14:14.758 "num_base_bdevs_operational": 2, 00:14:14.758 "process": { 00:14:14.758 "type": "rebuild", 00:14:14.758 "target": "spare", 00:14:14.758 "progress": { 00:14:14.758 "blocks": 20480, 00:14:14.758 "percent": 32 00:14:14.758 } 00:14:14.758 }, 00:14:14.758 "base_bdevs_list": [ 00:14:14.758 { 00:14:14.758 "name": "spare", 00:14:14.758 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:14.758 "is_configured": true, 00:14:14.758 "data_offset": 2048, 00:14:14.758 "data_size": 63488 00:14:14.758 }, 00:14:14.758 { 00:14:14.758 "name": "BaseBdev2", 00:14:14.758 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:14.758 "is_configured": true, 00:14:14.759 "data_offset": 2048, 00:14:14.759 "data_size": 63488 00:14:14.759 } 00:14:14.759 ] 00:14:14.759 }' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:14.759 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:14.759 11:30:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.759 "name": "raid_bdev1", 00:14:14.759 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:14.759 "strip_size_kb": 0, 00:14:14.759 "state": "online", 00:14:14.759 "raid_level": "raid1", 00:14:14.759 "superblock": true, 00:14:14.759 "num_base_bdevs": 2, 00:14:14.759 "num_base_bdevs_discovered": 2, 00:14:14.759 "num_base_bdevs_operational": 2, 00:14:14.759 "process": { 00:14:14.759 "type": "rebuild", 00:14:14.759 "target": "spare", 00:14:14.759 "progress": { 00:14:14.759 "blocks": 22528, 00:14:14.759 "percent": 35 00:14:14.759 } 00:14:14.759 }, 00:14:14.759 "base_bdevs_list": [ 
00:14:14.759 { 00:14:14.759 "name": "spare", 00:14:14.759 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:14.759 "is_configured": true, 00:14:14.759 "data_offset": 2048, 00:14:14.759 "data_size": 63488 00:14:14.759 }, 00:14:14.759 { 00:14:14.759 "name": "BaseBdev2", 00:14:14.759 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:14.759 "is_configured": true, 00:14:14.759 "data_offset": 2048, 00:14:14.759 "data_size": 63488 00:14:14.759 } 00:14:14.759 ] 00:14:14.759 }' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.759 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.698 "name": "raid_bdev1", 00:14:15.698 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:15.698 "strip_size_kb": 0, 00:14:15.698 "state": "online", 00:14:15.698 "raid_level": "raid1", 00:14:15.698 "superblock": true, 00:14:15.698 "num_base_bdevs": 2, 00:14:15.698 "num_base_bdevs_discovered": 2, 00:14:15.698 "num_base_bdevs_operational": 2, 00:14:15.698 "process": { 00:14:15.698 "type": "rebuild", 00:14:15.698 "target": "spare", 00:14:15.698 "progress": { 00:14:15.698 "blocks": 45056, 00:14:15.698 "percent": 70 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 "base_bdevs_list": [ 00:14:15.698 { 00:14:15.698 "name": "spare", 00:14:15.698 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:15.698 "is_configured": true, 00:14:15.698 "data_offset": 2048, 00:14:15.698 "data_size": 63488 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "name": "BaseBdev2", 00:14:15.698 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:15.698 "is_configured": true, 00:14:15.698 "data_offset": 2048, 00:14:15.698 "data_size": 63488 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }' 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.698 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.958 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.958 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:16.527 [2024-11-05 11:30:15.723569] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:16.527 [2024-11-05 11:30:15.723638] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:16.527 [2024-11-05 11:30:15.723734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.787 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.048 "name": "raid_bdev1", 00:14:17.048 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:17.048 "strip_size_kb": 0, 00:14:17.048 "state": "online", 00:14:17.048 "raid_level": "raid1", 00:14:17.048 "superblock": true, 00:14:17.048 "num_base_bdevs": 2, 00:14:17.048 
"num_base_bdevs_discovered": 2, 00:14:17.048 "num_base_bdevs_operational": 2, 00:14:17.048 "base_bdevs_list": [ 00:14:17.048 { 00:14:17.048 "name": "spare", 00:14:17.048 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:17.048 "is_configured": true, 00:14:17.048 "data_offset": 2048, 00:14:17.048 "data_size": 63488 00:14:17.048 }, 00:14:17.048 { 00:14:17.048 "name": "BaseBdev2", 00:14:17.048 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:17.048 "is_configured": true, 00:14:17.048 "data_offset": 2048, 00:14:17.048 "data_size": 63488 00:14:17.048 } 00:14:17.048 ] 00:14:17.048 }' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.048 "name": "raid_bdev1", 00:14:17.048 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:17.048 "strip_size_kb": 0, 00:14:17.048 "state": "online", 00:14:17.048 "raid_level": "raid1", 00:14:17.048 "superblock": true, 00:14:17.048 "num_base_bdevs": 2, 00:14:17.048 "num_base_bdevs_discovered": 2, 00:14:17.048 "num_base_bdevs_operational": 2, 00:14:17.048 "base_bdevs_list": [ 00:14:17.048 { 00:14:17.048 "name": "spare", 00:14:17.048 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:17.048 "is_configured": true, 00:14:17.048 "data_offset": 2048, 00:14:17.048 "data_size": 63488 00:14:17.048 }, 00:14:17.048 { 00:14:17.048 "name": "BaseBdev2", 00:14:17.048 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:17.048 "is_configured": true, 00:14:17.048 "data_offset": 2048, 00:14:17.048 "data_size": 63488 00:14:17.048 } 00:14:17.048 ] 00:14:17.048 }' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.048 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.307 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.307 "name": "raid_bdev1", 00:14:17.307 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:17.307 "strip_size_kb": 0, 00:14:17.307 "state": "online", 00:14:17.307 "raid_level": "raid1", 00:14:17.307 "superblock": true, 00:14:17.307 "num_base_bdevs": 2, 00:14:17.307 "num_base_bdevs_discovered": 2, 00:14:17.307 "num_base_bdevs_operational": 2, 00:14:17.307 "base_bdevs_list": [ 00:14:17.307 { 00:14:17.308 "name": "spare", 00:14:17.308 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:17.308 "is_configured": true, 00:14:17.308 "data_offset": 2048, 00:14:17.308 
"data_size": 63488 00:14:17.308 }, 00:14:17.308 { 00:14:17.308 "name": "BaseBdev2", 00:14:17.308 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:17.308 "is_configured": true, 00:14:17.308 "data_offset": 2048, 00:14:17.308 "data_size": 63488 00:14:17.308 } 00:14:17.308 ] 00:14:17.308 }' 00:14:17.308 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.308 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.567 [2024-11-05 11:30:16.804617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.567 [2024-11-05 11:30:16.804709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.567 [2024-11-05 11:30:16.804817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.567 [2024-11-05 11:30:16.804899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.567 [2024-11-05 11:30:16.804989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.567 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:17.827 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.828 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:17.828 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:17.828 /dev/nbd0 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 
-- # local i 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:17.828 1+0 records in 00:14:17.828 1+0 records out 00:14:17.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473986 s, 8.6 MB/s 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:17.828 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:18.088 /dev/nbd1 00:14:18.088 11:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.088 1+0 records in 00:14:18.088 1+0 records out 00:14:18.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359343 s, 11.4 MB/s 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:18.088 11:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.088 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.347 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.606 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.866 [2024-11-05 11:30:17.972238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:18.866 [2024-11-05 11:30:17.972297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.866 [2024-11-05 11:30:17.972322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:18.866 [2024-11-05 11:30:17.972331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.866 [2024-11-05 11:30:17.974478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.866 [2024-11-05 11:30:17.974564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:18.866 [2024-11-05 11:30:17.974680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:18.866 [2024-11-05 11:30:17.974768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.866 [2024-11-05 11:30:17.974973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.866 spare 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.866 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.866 [2024-11-05 11:30:18.074916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:18.866 [2024-11-05 11:30:18.074989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:18.866 [2024-11-05 11:30:18.075340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:18.866 [2024-11-05 11:30:18.075532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:18.866 [2024-11-05 11:30:18.075549] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:18.866 [2024-11-05 11:30:18.075734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.866 
11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.866 "name": "raid_bdev1", 00:14:18.866 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:18.866 "strip_size_kb": 0, 00:14:18.866 "state": "online", 00:14:18.866 "raid_level": "raid1", 00:14:18.866 "superblock": true, 00:14:18.866 "num_base_bdevs": 2, 00:14:18.866 "num_base_bdevs_discovered": 2, 00:14:18.866 "num_base_bdevs_operational": 2, 00:14:18.866 "base_bdevs_list": [ 00:14:18.866 { 00:14:18.866 "name": "spare", 00:14:18.866 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:18.866 "is_configured": true, 00:14:18.866 "data_offset": 2048, 00:14:18.866 "data_size": 63488 00:14:18.866 }, 00:14:18.866 { 00:14:18.866 "name": "BaseBdev2", 00:14:18.866 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:18.866 "is_configured": true, 00:14:18.866 "data_offset": 2048, 00:14:18.866 "data_size": 63488 00:14:18.866 } 00:14:18.866 ] 00:14:18.866 }' 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.866 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.435 11:30:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.435 "name": "raid_bdev1", 00:14:19.435 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:19.435 "strip_size_kb": 0, 00:14:19.435 "state": "online", 00:14:19.435 "raid_level": "raid1", 00:14:19.435 "superblock": true, 00:14:19.435 "num_base_bdevs": 2, 00:14:19.435 "num_base_bdevs_discovered": 2, 00:14:19.435 "num_base_bdevs_operational": 2, 00:14:19.435 "base_bdevs_list": [ 00:14:19.435 { 00:14:19.435 "name": "spare", 00:14:19.435 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:19.435 "is_configured": true, 00:14:19.435 "data_offset": 2048, 00:14:19.435 "data_size": 63488 00:14:19.435 }, 00:14:19.435 { 00:14:19.435 "name": "BaseBdev2", 00:14:19.435 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:19.435 "is_configured": true, 00:14:19.435 "data_offset": 2048, 00:14:19.435 "data_size": 63488 00:14:19.435 } 00:14:19.435 ] 00:14:19.435 }' 00:14:19.435 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.436 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.695 [2024-11-05 11:30:18.735190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.695 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.695 "name": "raid_bdev1", 00:14:19.695 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:19.695 "strip_size_kb": 0, 00:14:19.695 "state": "online", 00:14:19.695 "raid_level": "raid1", 00:14:19.695 "superblock": true, 00:14:19.695 "num_base_bdevs": 2, 00:14:19.695 "num_base_bdevs_discovered": 1, 00:14:19.696 "num_base_bdevs_operational": 1, 00:14:19.696 "base_bdevs_list": [ 00:14:19.696 { 00:14:19.696 "name": null, 00:14:19.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.696 "is_configured": false, 00:14:19.696 "data_offset": 0, 00:14:19.696 "data_size": 63488 00:14:19.696 }, 00:14:19.696 { 00:14:19.696 "name": "BaseBdev2", 00:14:19.696 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:19.696 "is_configured": true, 00:14:19.696 "data_offset": 2048, 00:14:19.696 "data_size": 63488 00:14:19.696 } 00:14:19.696 ] 00:14:19.696 }' 00:14:19.696 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.696 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.955 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.955 11:30:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.955 [2024-11-05 11:30:19.178484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.955 [2024-11-05 11:30:19.178736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:19.955 [2024-11-05 11:30:19.178801] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:19.955 [2024-11-05 11:30:19.178873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.955 [2024-11-05 11:30:19.195246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:19.955 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.955 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:19.955 [2024-11-05 11:30:19.197199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.350 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.350 "name": "raid_bdev1", 00:14:21.351 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:21.351 "strip_size_kb": 0, 00:14:21.351 "state": "online", 00:14:21.351 "raid_level": "raid1", 00:14:21.351 "superblock": true, 00:14:21.351 "num_base_bdevs": 2, 00:14:21.351 "num_base_bdevs_discovered": 2, 00:14:21.351 "num_base_bdevs_operational": 2, 00:14:21.351 "process": { 00:14:21.351 "type": "rebuild", 00:14:21.351 "target": "spare", 00:14:21.351 "progress": { 00:14:21.351 "blocks": 20480, 00:14:21.351 "percent": 32 00:14:21.351 } 00:14:21.351 }, 00:14:21.351 "base_bdevs_list": [ 00:14:21.351 { 00:14:21.351 "name": "spare", 00:14:21.351 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:21.351 "is_configured": true, 00:14:21.351 "data_offset": 2048, 00:14:21.351 "data_size": 63488 00:14:21.351 }, 00:14:21.351 { 00:14:21.351 "name": "BaseBdev2", 00:14:21.351 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:21.351 "is_configured": true, 00:14:21.351 "data_offset": 2048, 00:14:21.351 "data_size": 63488 00:14:21.351 } 00:14:21.351 ] 00:14:21.351 }' 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:21.351 11:30:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.351 [2024-11-05 11:30:20.360716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.351 [2024-11-05 11:30:20.402245] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.351 [2024-11-05 11:30:20.402308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.351 [2024-11-05 11:30:20.402324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.351 [2024-11-05 11:30:20.402333] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.351 "name": "raid_bdev1", 00:14:21.351 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:21.351 "strip_size_kb": 0, 00:14:21.351 "state": "online", 00:14:21.351 "raid_level": "raid1", 00:14:21.351 "superblock": true, 00:14:21.351 "num_base_bdevs": 2, 00:14:21.351 "num_base_bdevs_discovered": 1, 00:14:21.351 "num_base_bdevs_operational": 1, 00:14:21.351 "base_bdevs_list": [ 00:14:21.351 { 00:14:21.351 "name": null, 00:14:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.351 "is_configured": false, 00:14:21.351 "data_offset": 0, 00:14:21.351 "data_size": 63488 00:14:21.351 }, 00:14:21.351 { 00:14:21.351 "name": "BaseBdev2", 00:14:21.351 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:21.351 "is_configured": true, 00:14:21.351 "data_offset": 2048, 00:14:21.351 "data_size": 63488 00:14:21.351 } 00:14:21.351 ] 00:14:21.351 }' 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.351 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.927 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:21.927 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:21.927 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.927 [2024-11-05 11:30:20.936625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:21.927 [2024-11-05 11:30:20.936765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.927 [2024-11-05 11:30:20.936806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:21.927 [2024-11-05 11:30:20.936839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.927 [2024-11-05 11:30:20.937348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.927 [2024-11-05 11:30:20.937412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:21.927 [2024-11-05 11:30:20.937546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:21.927 [2024-11-05 11:30:20.937591] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:21.927 [2024-11-05 11:30:20.937635] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:21.927 [2024-11-05 11:30:20.937698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.927 [2024-11-05 11:30:20.953502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:21.927 spare 00:14:21.927 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.927 [2024-11-05 11:30:20.955402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.927 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.884 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.884 "name": "raid_bdev1", 00:14:22.884 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:22.884 "strip_size_kb": 0, 00:14:22.884 "state": "online", 00:14:22.884 
"raid_level": "raid1", 00:14:22.884 "superblock": true, 00:14:22.884 "num_base_bdevs": 2, 00:14:22.884 "num_base_bdevs_discovered": 2, 00:14:22.884 "num_base_bdevs_operational": 2, 00:14:22.884 "process": { 00:14:22.884 "type": "rebuild", 00:14:22.884 "target": "spare", 00:14:22.884 "progress": { 00:14:22.884 "blocks": 20480, 00:14:22.884 "percent": 32 00:14:22.884 } 00:14:22.884 }, 00:14:22.884 "base_bdevs_list": [ 00:14:22.884 { 00:14:22.884 "name": "spare", 00:14:22.884 "uuid": "e3ae6dab-f970-5a92-806d-81753e2d7ddd", 00:14:22.884 "is_configured": true, 00:14:22.884 "data_offset": 2048, 00:14:22.884 "data_size": 63488 00:14:22.884 }, 00:14:22.884 { 00:14:22.884 "name": "BaseBdev2", 00:14:22.884 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:22.884 "is_configured": true, 00:14:22.884 "data_offset": 2048, 00:14:22.884 "data_size": 63488 00:14:22.884 } 00:14:22.884 ] 00:14:22.884 }' 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.884 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.884 [2024-11-05 11:30:22.095215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.144 [2024-11-05 11:30:22.160468] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.144 [2024-11-05 11:30:22.160571] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.144 [2024-11-05 11:30:22.160610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.144 [2024-11-05 11:30:22.160631] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.144 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.144 11:30:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.145 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.145 "name": "raid_bdev1", 00:14:23.145 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:23.145 "strip_size_kb": 0, 00:14:23.145 "state": "online", 00:14:23.145 "raid_level": "raid1", 00:14:23.145 "superblock": true, 00:14:23.145 "num_base_bdevs": 2, 00:14:23.145 "num_base_bdevs_discovered": 1, 00:14:23.145 "num_base_bdevs_operational": 1, 00:14:23.145 "base_bdevs_list": [ 00:14:23.145 { 00:14:23.145 "name": null, 00:14:23.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.145 "is_configured": false, 00:14:23.145 "data_offset": 0, 00:14:23.145 "data_size": 63488 00:14:23.145 }, 00:14:23.145 { 00:14:23.145 "name": "BaseBdev2", 00:14:23.145 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:23.145 "is_configured": true, 00:14:23.145 "data_offset": 2048, 00:14:23.145 "data_size": 63488 00:14:23.145 } 00:14:23.145 ] 00:14:23.145 }' 00:14:23.145 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.145 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.404 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.404 "name": "raid_bdev1", 00:14:23.404 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:23.404 "strip_size_kb": 0, 00:14:23.404 "state": "online", 00:14:23.404 "raid_level": "raid1", 00:14:23.404 "superblock": true, 00:14:23.404 "num_base_bdevs": 2, 00:14:23.404 "num_base_bdevs_discovered": 1, 00:14:23.404 "num_base_bdevs_operational": 1, 00:14:23.404 "base_bdevs_list": [ 00:14:23.404 { 00:14:23.404 "name": null, 00:14:23.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.405 "is_configured": false, 00:14:23.405 "data_offset": 0, 00:14:23.405 "data_size": 63488 00:14:23.405 }, 00:14:23.405 { 00:14:23.405 "name": "BaseBdev2", 00:14:23.405 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:23.405 "is_configured": true, 00:14:23.405 "data_offset": 2048, 00:14:23.405 "data_size": 63488 00:14:23.405 } 00:14:23.405 ] 00:14:23.405 }' 00:14:23.405 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.664 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.664 [2024-11-05 11:30:22.750099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.664 [2024-11-05 11:30:22.750211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.664 [2024-11-05 11:30:22.750237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:23.665 [2024-11-05 11:30:22.750256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.665 [2024-11-05 11:30:22.750713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.665 [2024-11-05 11:30:22.750730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.665 [2024-11-05 11:30:22.750810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:23.665 [2024-11-05 11:30:22.750824] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:23.665 [2024-11-05 11:30:22.750834] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:23.665 [2024-11-05 11:30:22.750845] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:23.665 BaseBdev1 00:14:23.665 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:23.665 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.602 "name": "raid_bdev1", 00:14:24.602 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:24.602 "strip_size_kb": 0, 
00:14:24.602 "state": "online", 00:14:24.602 "raid_level": "raid1", 00:14:24.602 "superblock": true, 00:14:24.602 "num_base_bdevs": 2, 00:14:24.602 "num_base_bdevs_discovered": 1, 00:14:24.602 "num_base_bdevs_operational": 1, 00:14:24.602 "base_bdevs_list": [ 00:14:24.602 { 00:14:24.602 "name": null, 00:14:24.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.602 "is_configured": false, 00:14:24.602 "data_offset": 0, 00:14:24.602 "data_size": 63488 00:14:24.602 }, 00:14:24.602 { 00:14:24.602 "name": "BaseBdev2", 00:14:24.602 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:24.602 "is_configured": true, 00:14:24.602 "data_offset": 2048, 00:14:24.602 "data_size": 63488 00:14:24.602 } 00:14:24.602 ] 00:14:24.602 }' 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.602 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.172 "name": "raid_bdev1", 00:14:25.172 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:25.172 "strip_size_kb": 0, 00:14:25.172 "state": "online", 00:14:25.172 "raid_level": "raid1", 00:14:25.172 "superblock": true, 00:14:25.172 "num_base_bdevs": 2, 00:14:25.172 "num_base_bdevs_discovered": 1, 00:14:25.172 "num_base_bdevs_operational": 1, 00:14:25.172 "base_bdevs_list": [ 00:14:25.172 { 00:14:25.172 "name": null, 00:14:25.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.172 "is_configured": false, 00:14:25.172 "data_offset": 0, 00:14:25.172 "data_size": 63488 00:14:25.172 }, 00:14:25.172 { 00:14:25.172 "name": "BaseBdev2", 00:14:25.172 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:25.172 "is_configured": true, 00:14:25.172 "data_offset": 2048, 00:14:25.172 "data_size": 63488 00:14:25.172 } 00:14:25.172 ] 00:14:25.172 }' 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:25.172 11:30:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.172 [2024-11-05 11:30:24.351402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.172 [2024-11-05 11:30:24.351570] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:25.172 [2024-11-05 11:30:24.351585] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:25.172 request: 00:14:25.172 { 00:14:25.172 "base_bdev": "BaseBdev1", 00:14:25.172 "raid_bdev": "raid_bdev1", 00:14:25.172 "method": "bdev_raid_add_base_bdev", 00:14:25.172 "req_id": 1 00:14:25.172 } 00:14:25.172 Got JSON-RPC error response 00:14:25.172 response: 00:14:25.172 { 00:14:25.172 "code": -22, 00:14:25.172 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:25.172 } 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.172 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:26.111 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.111 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.112 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.371 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.371 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.371 "name": "raid_bdev1", 00:14:26.371 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 
00:14:26.371 "strip_size_kb": 0, 00:14:26.371 "state": "online", 00:14:26.371 "raid_level": "raid1", 00:14:26.371 "superblock": true, 00:14:26.371 "num_base_bdevs": 2, 00:14:26.371 "num_base_bdevs_discovered": 1, 00:14:26.371 "num_base_bdevs_operational": 1, 00:14:26.371 "base_bdevs_list": [ 00:14:26.371 { 00:14:26.371 "name": null, 00:14:26.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.371 "is_configured": false, 00:14:26.371 "data_offset": 0, 00:14:26.371 "data_size": 63488 00:14:26.371 }, 00:14:26.371 { 00:14:26.371 "name": "BaseBdev2", 00:14:26.371 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:26.371 "is_configured": true, 00:14:26.371 "data_offset": 2048, 00:14:26.371 "data_size": 63488 00:14:26.371 } 00:14:26.371 ] 00:14:26.371 }' 00:14:26.371 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.371 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.631 11:30:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.631 "name": "raid_bdev1", 00:14:26.631 "uuid": "8828a873-64ca-4fe4-9a3b-9a03a628ce6e", 00:14:26.631 "strip_size_kb": 0, 00:14:26.631 "state": "online", 00:14:26.631 "raid_level": "raid1", 00:14:26.631 "superblock": true, 00:14:26.631 "num_base_bdevs": 2, 00:14:26.631 "num_base_bdevs_discovered": 1, 00:14:26.631 "num_base_bdevs_operational": 1, 00:14:26.631 "base_bdevs_list": [ 00:14:26.631 { 00:14:26.631 "name": null, 00:14:26.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.631 "is_configured": false, 00:14:26.631 "data_offset": 0, 00:14:26.631 "data_size": 63488 00:14:26.631 }, 00:14:26.631 { 00:14:26.631 "name": "BaseBdev2", 00:14:26.631 "uuid": "3c6b5da8-d15f-5af0-b5f5-689958a74293", 00:14:26.631 "is_configured": true, 00:14:26.631 "data_offset": 2048, 00:14:26.631 "data_size": 63488 00:14:26.631 } 00:14:26.631 ] 00:14:26.631 }' 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.631 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75821 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75821 ']' 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75821 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75821 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75821' 00:14:26.891 killing process with pid 75821 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75821 00:14:26.891 Received shutdown signal, test time was about 60.000000 seconds 00:14:26.891 00:14:26.891 Latency(us) 00:14:26.891 [2024-11-05T11:30:26.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.891 [2024-11-05T11:30:26.165Z] =================================================================================================================== 00:14:26.891 [2024-11-05T11:30:26.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:26.891 [2024-11-05 11:30:25.962812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.891 [2024-11-05 11:30:25.962943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.891 [2024-11-05 11:30:25.962991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.891 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75821 00:14:26.891 [2024-11-05 11:30:25.963002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:27.151 [2024-11-05 11:30:26.253966] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.089 11:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:28.089 00:14:28.089 real 0m22.773s 
00:14:28.089 user 0m28.209s 00:14:28.089 sys 0m3.350s 00:14:28.089 11:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.089 11:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.089 ************************************ 00:14:28.089 END TEST raid_rebuild_test_sb 00:14:28.089 ************************************ 00:14:28.348 11:30:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:28.348 11:30:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:28.348 11:30:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.348 11:30:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.348 ************************************ 00:14:28.348 START TEST raid_rebuild_test_io 00:14:28.348 ************************************ 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.348 
11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76549 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76549 00:14:28.348 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76549 ']' 00:14:28.348 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.349 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.349 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.349 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.349 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.349 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.349 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.349 Zero copy mechanism will not be used. 00:14:28.349 [2024-11-05 11:30:27.481532] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:28.349 [2024-11-05 11:30:27.481648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76549 ] 00:14:28.608 [2024-11-05 11:30:27.651945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.608 [2024-11-05 11:30:27.758929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.868 [2024-11-05 11:30:27.955419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.868 [2024-11-05 11:30:27.955512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.128 BaseBdev1_malloc 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.128 [2024-11-05 11:30:28.344501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.128 [2024-11-05 11:30:28.344630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.128 [2024-11-05 11:30:28.344672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.128 [2024-11-05 11:30:28.344703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.128 [2024-11-05 11:30:28.346756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.128 [2024-11-05 11:30:28.346854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.128 BaseBdev1 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.128 BaseBdev2_malloc 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.128 [2024-11-05 11:30:28.396977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.128 [2024-11-05 11:30:28.397070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.128 [2024-11-05 11:30:28.397121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.128 [2024-11-05 11:30:28.397163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.128 [2024-11-05 11:30:28.399164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.128 [2024-11-05 11:30:28.399237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.128 BaseBdev2 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:29.128 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.388 spare_malloc 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.388 11:30:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.388 spare_delay 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.388 [2024-11-05 11:30:28.471677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.388 [2024-11-05 11:30:28.471770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.388 [2024-11-05 11:30:28.471793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:29.388 [2024-11-05 11:30:28.471803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.388 [2024-11-05 11:30:28.473868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.388 [2024-11-05 11:30:28.473904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.388 spare 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.388 [2024-11-05 11:30:28.483708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.388 [2024-11-05 11:30:28.485403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.388 [2024-11-05 11:30:28.485487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.388 [2024-11-05 11:30:28.485499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:29.388 [2024-11-05 11:30:28.485730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.388 [2024-11-05 11:30:28.485881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.388 [2024-11-05 11:30:28.485891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.388 [2024-11-05 11:30:28.486040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.388 11:30:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.388 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.389 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.389 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.389 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.389 "name": "raid_bdev1", 00:14:29.389 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:29.389 "strip_size_kb": 0, 00:14:29.389 "state": "online", 00:14:29.389 "raid_level": "raid1", 00:14:29.389 "superblock": false, 00:14:29.389 "num_base_bdevs": 2, 00:14:29.389 "num_base_bdevs_discovered": 2, 00:14:29.389 "num_base_bdevs_operational": 2, 00:14:29.389 "base_bdevs_list": [ 00:14:29.389 { 00:14:29.389 "name": "BaseBdev1", 00:14:29.389 "uuid": "985dc045-ed28-5546-bf4d-4c8ce5e185af", 00:14:29.389 "is_configured": true, 00:14:29.389 "data_offset": 0, 00:14:29.389 "data_size": 65536 00:14:29.389 }, 00:14:29.389 { 00:14:29.389 "name": "BaseBdev2", 00:14:29.389 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:29.389 "is_configured": true, 00:14:29.389 "data_offset": 0, 00:14:29.389 "data_size": 65536 00:14:29.389 } 00:14:29.389 ] 00:14:29.389 }' 00:14:29.389 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.389 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.958 [2024-11-05 11:30:28.947234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.958 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.958 [2024-11-05 11:30:29.026773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.958 "name": 
"raid_bdev1", 00:14:29.958 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:29.958 "strip_size_kb": 0, 00:14:29.958 "state": "online", 00:14:29.958 "raid_level": "raid1", 00:14:29.958 "superblock": false, 00:14:29.958 "num_base_bdevs": 2, 00:14:29.958 "num_base_bdevs_discovered": 1, 00:14:29.958 "num_base_bdevs_operational": 1, 00:14:29.958 "base_bdevs_list": [ 00:14:29.958 { 00:14:29.958 "name": null, 00:14:29.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.958 "is_configured": false, 00:14:29.958 "data_offset": 0, 00:14:29.958 "data_size": 65536 00:14:29.958 }, 00:14:29.958 { 00:14:29.958 "name": "BaseBdev2", 00:14:29.958 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:29.958 "is_configured": true, 00:14:29.958 "data_offset": 0, 00:14:29.958 "data_size": 65536 00:14:29.958 } 00:14:29.958 ] 00:14:29.958 }' 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.958 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.958 [2024-11-05 11:30:29.122739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:29.958 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.958 Zero copy mechanism will not be used. 00:14:29.958 Running I/O for 60 seconds... 
00:14:30.529 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.529 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.529 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.529 [2024-11-05 11:30:29.504598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.529 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.529 [2024-11-05 11:30:29.549721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:30.529 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:30.529 [2024-11-05 11:30:29.551681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.529 [2024-11-05 11:30:29.659243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.529 [2024-11-05 11:30:29.659881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.788 [2024-11-05 11:30:29.868059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.788 [2024-11-05 11:30:29.868532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.046 189.00 IOPS, 567.00 MiB/s [2024-11-05T11:30:30.320Z] [2024-11-05 11:30:30.194135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:31.046 [2024-11-05 11:30:30.313903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.046 [2024-11-05 11:30:30.314265] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.322 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.600 "name": "raid_bdev1", 00:14:31.600 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:31.600 "strip_size_kb": 0, 00:14:31.600 "state": "online", 00:14:31.600 "raid_level": "raid1", 00:14:31.600 "superblock": false, 00:14:31.600 "num_base_bdevs": 2, 00:14:31.600 "num_base_bdevs_discovered": 2, 00:14:31.600 "num_base_bdevs_operational": 2, 00:14:31.600 "process": { 00:14:31.600 "type": "rebuild", 00:14:31.600 "target": "spare", 00:14:31.600 "progress": { 00:14:31.600 "blocks": 12288, 00:14:31.600 "percent": 18 00:14:31.600 } 00:14:31.600 }, 00:14:31.600 "base_bdevs_list": [ 00:14:31.600 { 00:14:31.600 "name": "spare", 00:14:31.600 "uuid": 
"4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:31.600 "is_configured": true, 00:14:31.600 "data_offset": 0, 00:14:31.600 "data_size": 65536 00:14:31.600 }, 00:14:31.600 { 00:14:31.600 "name": "BaseBdev2", 00:14:31.600 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:31.600 "is_configured": true, 00:14:31.600 "data_offset": 0, 00:14:31.600 "data_size": 65536 00:14:31.600 } 00:14:31.600 ] 00:14:31.600 }' 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.600 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.600 [2024-11-05 11:30:30.693790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.600 [2024-11-05 11:30:30.777544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.600 [2024-11-05 11:30:30.777809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.860 [2024-11-05 11:30:30.884469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.860 [2024-11-05 11:30:30.897559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.860 [2024-11-05 11:30:30.897599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:31.860 [2024-11-05 11:30:30.897610] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.860 [2024-11-05 11:30:30.938603] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.860 11:30:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.860 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.860 "name": "raid_bdev1", 00:14:31.860 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:31.860 "strip_size_kb": 0, 00:14:31.860 "state": "online", 00:14:31.860 "raid_level": "raid1", 00:14:31.860 "superblock": false, 00:14:31.860 "num_base_bdevs": 2, 00:14:31.860 "num_base_bdevs_discovered": 1, 00:14:31.860 "num_base_bdevs_operational": 1, 00:14:31.860 "base_bdevs_list": [ 00:14:31.860 { 00:14:31.860 "name": null, 00:14:31.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.860 "is_configured": false, 00:14:31.860 "data_offset": 0, 00:14:31.860 "data_size": 65536 00:14:31.860 }, 00:14:31.860 { 00:14:31.860 "name": "BaseBdev2", 00:14:31.860 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:31.860 "is_configured": true, 00:14:31.860 "data_offset": 0, 00:14:31.860 "data_size": 65536 00:14:31.860 } 00:14:31.860 ] 00:14:31.860 }' 00:14:31.860 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.860 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.430 140.50 IOPS, 421.50 MiB/s [2024-11-05T11:30:31.704Z] 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.430 "name": "raid_bdev1", 00:14:32.430 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:32.430 "strip_size_kb": 0, 00:14:32.430 "state": "online", 00:14:32.430 "raid_level": "raid1", 00:14:32.430 "superblock": false, 00:14:32.430 "num_base_bdevs": 2, 00:14:32.430 "num_base_bdevs_discovered": 1, 00:14:32.430 "num_base_bdevs_operational": 1, 00:14:32.430 "base_bdevs_list": [ 00:14:32.430 { 00:14:32.430 "name": null, 00:14:32.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.430 "is_configured": false, 00:14:32.430 "data_offset": 0, 00:14:32.430 "data_size": 65536 00:14:32.430 }, 00:14:32.430 { 00:14:32.430 "name": "BaseBdev2", 00:14:32.430 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:32.430 "is_configured": true, 00:14:32.430 "data_offset": 0, 00:14:32.430 "data_size": 65536 00:14:32.430 } 00:14:32.430 ] 00:14:32.430 }' 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.430 [2024-11-05 11:30:31.561781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.430 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:32.430 [2024-11-05 11:30:31.605571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:32.430 [2024-11-05 11:30:31.607424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.690 [2024-11-05 11:30:31.729516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.690 [2024-11-05 11:30:31.870409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.950 178.33 IOPS, 535.00 MiB/s [2024-11-05T11:30:32.224Z] [2024-11-05 11:30:32.211416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:32.950 [2024-11-05 11:30:32.211939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:33.209 [2024-11-05 11:30:32.325976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:33.209 [2024-11-05 11:30:32.326340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.468 11:30:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.468 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.468 "name": "raid_bdev1", 00:14:33.468 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:33.468 "strip_size_kb": 0, 00:14:33.468 "state": "online", 00:14:33.468 "raid_level": "raid1", 00:14:33.468 "superblock": false, 00:14:33.468 "num_base_bdevs": 2, 00:14:33.468 "num_base_bdevs_discovered": 2, 00:14:33.468 "num_base_bdevs_operational": 2, 00:14:33.468 "process": { 00:14:33.468 "type": "rebuild", 00:14:33.468 "target": "spare", 00:14:33.468 "progress": { 00:14:33.468 "blocks": 12288, 00:14:33.468 "percent": 18 00:14:33.468 } 00:14:33.468 }, 00:14:33.469 "base_bdevs_list": [ 00:14:33.469 { 00:14:33.469 "name": "spare", 00:14:33.469 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:33.469 "is_configured": true, 00:14:33.469 "data_offset": 0, 00:14:33.469 "data_size": 65536 00:14:33.469 }, 00:14:33.469 { 00:14:33.469 "name": "BaseBdev2", 00:14:33.469 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:33.469 "is_configured": true, 00:14:33.469 "data_offset": 0, 00:14:33.469 "data_size": 65536 00:14:33.469 } 00:14:33.469 ] 
00:14:33.469 }' 00:14:33.469 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.469 [2024-11-05 11:30:32.663282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:33.469 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.469 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.728 11:30:32 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.728 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.728 [2024-11-05 11:30:32.788783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:33.729 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.729 "name": "raid_bdev1", 00:14:33.729 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:33.729 "strip_size_kb": 0, 00:14:33.729 "state": "online", 00:14:33.729 "raid_level": "raid1", 00:14:33.729 "superblock": false, 00:14:33.729 "num_base_bdevs": 2, 00:14:33.729 "num_base_bdevs_discovered": 2, 00:14:33.729 "num_base_bdevs_operational": 2, 00:14:33.729 "process": { 00:14:33.729 "type": "rebuild", 00:14:33.729 "target": "spare", 00:14:33.729 "progress": { 00:14:33.729 "blocks": 14336, 00:14:33.729 "percent": 21 00:14:33.729 } 00:14:33.729 }, 00:14:33.729 "base_bdevs_list": [ 00:14:33.729 { 00:14:33.729 "name": "spare", 00:14:33.729 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:33.729 "is_configured": true, 00:14:33.729 "data_offset": 0, 00:14:33.729 "data_size": 65536 00:14:33.729 }, 00:14:33.729 { 00:14:33.729 "name": "BaseBdev2", 00:14:33.729 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:33.729 "is_configured": true, 00:14:33.729 "data_offset": 0, 00:14:33.729 "data_size": 65536 00:14:33.729 } 00:14:33.729 ] 00:14:33.729 }' 00:14:33.729 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.729 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.729 11:30:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.729 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.729 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.247 152.00 IOPS, 456.00 MiB/s [2024-11-05T11:30:33.521Z] [2024-11-05 11:30:33.463023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:34.507 [2024-11-05 11:30:33.685461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.766 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:34.767 "name": "raid_bdev1", 00:14:34.767 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:34.767 "strip_size_kb": 0, 00:14:34.767 "state": "online", 00:14:34.767 "raid_level": "raid1", 00:14:34.767 "superblock": false, 00:14:34.767 "num_base_bdevs": 2, 00:14:34.767 "num_base_bdevs_discovered": 2, 00:14:34.767 "num_base_bdevs_operational": 2, 00:14:34.767 "process": { 00:14:34.767 "type": "rebuild", 00:14:34.767 "target": "spare", 00:14:34.767 "progress": { 00:14:34.767 "blocks": 28672, 00:14:34.767 "percent": 43 00:14:34.767 } 00:14:34.767 }, 00:14:34.767 "base_bdevs_list": [ 00:14:34.767 { 00:14:34.767 "name": "spare", 00:14:34.767 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:34.767 "is_configured": true, 00:14:34.767 "data_offset": 0, 00:14:34.767 "data_size": 65536 00:14:34.767 }, 00:14:34.767 { 00:14:34.767 "name": "BaseBdev2", 00:14:34.767 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:34.767 "is_configured": true, 00:14:34.767 "data_offset": 0, 00:14:34.767 "data_size": 65536 00:14:34.767 } 00:14:34.767 ] 00:14:34.767 }' 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.767 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.965 131.60 IOPS, 394.80 MiB/s [2024-11-05T11:30:35.239Z] 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.965 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.965 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.966 "name": "raid_bdev1", 00:14:35.966 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:35.966 "strip_size_kb": 0, 00:14:35.966 "state": "online", 00:14:35.966 "raid_level": "raid1", 00:14:35.966 "superblock": false, 00:14:35.966 "num_base_bdevs": 2, 00:14:35.966 "num_base_bdevs_discovered": 2, 00:14:35.966 "num_base_bdevs_operational": 2, 00:14:35.966 "process": { 00:14:35.966 "type": "rebuild", 00:14:35.966 "target": "spare", 00:14:35.966 "progress": { 00:14:35.966 "blocks": 49152, 00:14:35.966 "percent": 75 00:14:35.966 } 00:14:35.966 }, 00:14:35.966 "base_bdevs_list": [ 00:14:35.966 { 00:14:35.966 "name": "spare", 00:14:35.966 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:35.966 "is_configured": true, 00:14:35.966 "data_offset": 0, 00:14:35.966 "data_size": 65536 00:14:35.966 }, 00:14:35.966 { 00:14:35.966 "name": "BaseBdev2", 00:14:35.966 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:35.966 "is_configured": true, 00:14:35.966 "data_offset": 0, 00:14:35.966 
"data_size": 65536 00:14:35.966 } 00:14:35.966 ] 00:14:35.966 }' 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.966 116.50 IOPS, 349.50 MiB/s [2024-11-05T11:30:35.240Z] 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.966 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.226 [2024-11-05 11:30:35.344420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:36.485 [2024-11-05 11:30:35.559355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:36.744 [2024-11-05 11:30:36.001243] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.004 [2024-11-05 11:30:36.106788] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.004 [2024-11-05 11:30:36.109136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.004 104.00 IOPS, 312.00 MiB/s [2024-11-05T11:30:36.278Z] 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.004 "name": "raid_bdev1", 00:14:37.004 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:37.004 "strip_size_kb": 0, 00:14:37.004 "state": "online", 00:14:37.004 "raid_level": "raid1", 00:14:37.004 "superblock": false, 00:14:37.004 "num_base_bdevs": 2, 00:14:37.004 "num_base_bdevs_discovered": 2, 00:14:37.004 "num_base_bdevs_operational": 2, 00:14:37.004 "base_bdevs_list": [ 00:14:37.004 { 00:14:37.004 "name": "spare", 00:14:37.004 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:37.004 "is_configured": true, 00:14:37.004 "data_offset": 0, 00:14:37.004 "data_size": 65536 00:14:37.004 }, 00:14:37.004 { 00:14:37.004 "name": "BaseBdev2", 00:14:37.004 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:37.004 "is_configured": true, 00:14:37.004 "data_offset": 0, 00:14:37.004 "data_size": 65536 00:14:37.004 } 00:14:37.004 ] 00:14:37.004 }' 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.004 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.264 11:30:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.264 "name": "raid_bdev1", 00:14:37.264 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:37.264 "strip_size_kb": 0, 00:14:37.264 "state": "online", 00:14:37.264 "raid_level": "raid1", 00:14:37.264 "superblock": false, 00:14:37.264 "num_base_bdevs": 2, 00:14:37.264 "num_base_bdevs_discovered": 2, 00:14:37.264 "num_base_bdevs_operational": 2, 00:14:37.264 "base_bdevs_list": [ 00:14:37.264 { 00:14:37.264 "name": "spare", 00:14:37.264 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 }, 
00:14:37.264 { 00:14:37.264 "name": "BaseBdev2", 00:14:37.264 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 } 00:14:37.264 ] 00:14:37.264 }' 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.264 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.264 "name": "raid_bdev1", 00:14:37.264 "uuid": "5a47df77-3ebb-47a0-8ac8-d54e4e8cda9d", 00:14:37.264 "strip_size_kb": 0, 00:14:37.264 "state": "online", 00:14:37.264 "raid_level": "raid1", 00:14:37.264 "superblock": false, 00:14:37.264 "num_base_bdevs": 2, 00:14:37.264 "num_base_bdevs_discovered": 2, 00:14:37.264 "num_base_bdevs_operational": 2, 00:14:37.264 "base_bdevs_list": [ 00:14:37.264 { 00:14:37.264 "name": "spare", 00:14:37.264 "uuid": "4fed807b-ef56-52fb-a24b-653d97ea8ed3", 00:14:37.264 "is_configured": true, 00:14:37.264 "data_offset": 0, 00:14:37.264 "data_size": 65536 00:14:37.264 }, 00:14:37.264 { 00:14:37.264 "name": "BaseBdev2", 00:14:37.265 "uuid": "13b59dbd-0e8e-590d-8caf-68ea449b7ef8", 00:14:37.265 "is_configured": true, 00:14:37.265 "data_offset": 0, 00:14:37.265 "data_size": 65536 00:14:37.265 } 00:14:37.265 ] 00:14:37.265 }' 00:14:37.265 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.265 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.834 [2024-11-05 11:30:36.945859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.834 [2024-11-05 11:30:36.945939] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.834 00:14:37.834 Latency(us) 00:14:37.834 [2024-11-05T11:30:37.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.834 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:37.834 raid_bdev1 : 7.86 95.42 286.26 0.00 0.00 14414.82 300.49 114931.26 00:14:37.834 [2024-11-05T11:30:37.108Z] =================================================================================================================== 00:14:37.834 [2024-11-05T11:30:37.108Z] Total : 95.42 286.26 0.00 0.00 14414.82 300.49 114931.26 00:14:37.834 [2024-11-05 11:30:36.990886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.834 [2024-11-05 11:30:36.990973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.834 [2024-11-05 11:30:36.991092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.834 { 00:14:37.834 "results": [ 00:14:37.834 { 00:14:37.834 "job": "raid_bdev1", 00:14:37.834 "core_mask": "0x1", 00:14:37.834 "workload": "randrw", 00:14:37.834 "percentage": 50, 00:14:37.834 "status": "finished", 00:14:37.834 "queue_depth": 2, 00:14:37.834 "io_size": 3145728, 00:14:37.834 "runtime": 7.860068, 00:14:37.834 "iops": 95.41902182016746, 00:14:37.834 "mibps": 286.25706546050236, 00:14:37.834 "io_failed": 0, 00:14:37.834 "io_timeout": 0, 00:14:37.834 "avg_latency_us": 14414.818487336244, 00:14:37.834 "min_latency_us": 300.49257641921395, 00:14:37.834 "max_latency_us": 114931.2558951965 00:14:37.834 } 00:14:37.834 ], 00:14:37.834 "core_count": 1 00:14:37.834 } 00:14:37.834 [2024-11-05 11:30:36.991172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.834 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.834 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.835 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:38.094 /dev/nbd0 00:14:38.094 11:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.094 1+0 records in 00:14:38.094 1+0 records out 00:14:38.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382669 s, 10.7 MB/s 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:38.094 11:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.094 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.095 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:38.354 /dev/nbd1 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:38.354 11:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:38.354 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.355 1+0 records in 00:14:38.355 1+0 records out 00:14:38.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530071 s, 7.7 MB/s 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.355 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:38.615 11:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.615 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:38.879 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 
-- # local i 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.880 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76549 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76549 ']' 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76549 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76549 00:14:39.138 killing process with pid 76549 00:14:39.138 Received shutdown signal, test time was about 9.109973 seconds 00:14:39.138 00:14:39.138 
Latency(us) 00:14:39.138 [2024-11-05T11:30:38.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.138 [2024-11-05T11:30:38.412Z] =================================================================================================================== 00:14:39.138 [2024-11-05T11:30:38.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76549' 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76549 00:14:39.138 [2024-11-05 11:30:38.217227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.138 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76549 00:14:39.396 [2024-11-05 11:30:38.437167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:40.775 00:14:40.775 real 0m12.241s 00:14:40.775 user 0m15.401s 00:14:40.775 sys 0m1.494s 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.775 ************************************ 00:14:40.775 END TEST raid_rebuild_test_io 00:14:40.775 ************************************ 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.775 11:30:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:40.775 11:30:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:40.775 11:30:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.775 11:30:39 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.775 ************************************ 00:14:40.775 START TEST raid_rebuild_test_sb_io 00:14:40.775 ************************************ 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76925 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76925 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 76925 ']' 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.775 11:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.775 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:40.775 Zero copy mechanism will not be used. 00:14:40.775 [2024-11-05 11:30:39.794547] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:40.775 [2024-11-05 11:30:39.794679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76925 ] 00:14:40.775 [2024-11-05 11:30:39.965693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.035 [2024-11-05 11:30:40.076425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.035 [2024-11-05 11:30:40.275722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.035 [2024-11-05 11:30:40.275754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 BaseBdev1_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 [2024-11-05 11:30:40.656797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:41.604 [2024-11-05 11:30:40.656905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.604 [2024-11-05 11:30:40.656930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:41.604 [2024-11-05 11:30:40.656940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.604 [2024-11-05 11:30:40.659053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.604 [2024-11-05 11:30:40.659094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.604 BaseBdev1 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 BaseBdev2_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 [2024-11-05 11:30:40.709682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:41.604 [2024-11-05 11:30:40.709738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.604 [2024-11-05 11:30:40.709771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:41.604 [2024-11-05 11:30:40.709783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.604 [2024-11-05 11:30:40.711916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.604 [2024-11-05 11:30:40.711956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:41.604 BaseBdev2 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 spare_malloc 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 spare_delay 
00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 [2024-11-05 11:30:40.786159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:41.604 [2024-11-05 11:30:40.786212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.604 [2024-11-05 11:30:40.786231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:41.604 [2024-11-05 11:30:40.786241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.604 [2024-11-05 11:30:40.788327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.604 [2024-11-05 11:30:40.788368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.604 spare 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 [2024-11-05 11:30:40.798194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.604 [2024-11-05 11:30:40.799933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.604 [2024-11-05 11:30:40.800097] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:41.604 [2024-11-05 11:30:40.800115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.604 [2024-11-05 11:30:40.800364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:41.604 [2024-11-05 11:30:40.800530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:41.604 [2024-11-05 11:30:40.800548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:41.604 [2024-11-05 11:30:40.800696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.604 11:30:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.604 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.604 "name": "raid_bdev1", 00:14:41.604 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:41.604 "strip_size_kb": 0, 00:14:41.604 "state": "online", 00:14:41.604 "raid_level": "raid1", 00:14:41.604 "superblock": true, 00:14:41.604 "num_base_bdevs": 2, 00:14:41.604 "num_base_bdevs_discovered": 2, 00:14:41.604 "num_base_bdevs_operational": 2, 00:14:41.604 "base_bdevs_list": [ 00:14:41.604 { 00:14:41.604 "name": "BaseBdev1", 00:14:41.604 "uuid": "fd8889f8-b8bd-5298-ae36-5343ce29b1b8", 00:14:41.604 "is_configured": true, 00:14:41.604 "data_offset": 2048, 00:14:41.604 "data_size": 63488 00:14:41.605 }, 00:14:41.605 { 00:14:41.605 "name": "BaseBdev2", 00:14:41.605 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:41.605 "is_configured": true, 00:14:41.605 "data_offset": 2048, 00:14:41.605 "data_size": 63488 00:14:41.605 } 00:14:41.605 ] 00:14:41.605 }' 00:14:41.605 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.605 11:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.171 11:30:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 [2024-11-05 11:30:41.249730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 [2024-11-05 11:30:41.333274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.171 "name": "raid_bdev1", 00:14:42.171 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:42.171 "strip_size_kb": 0, 00:14:42.171 "state": "online", 00:14:42.171 
"raid_level": "raid1", 00:14:42.171 "superblock": true, 00:14:42.171 "num_base_bdevs": 2, 00:14:42.171 "num_base_bdevs_discovered": 1, 00:14:42.171 "num_base_bdevs_operational": 1, 00:14:42.171 "base_bdevs_list": [ 00:14:42.171 { 00:14:42.171 "name": null, 00:14:42.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.171 "is_configured": false, 00:14:42.171 "data_offset": 0, 00:14:42.171 "data_size": 63488 00:14:42.171 }, 00:14:42.171 { 00:14:42.171 "name": "BaseBdev2", 00:14:42.171 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:42.171 "is_configured": true, 00:14:42.171 "data_offset": 2048, 00:14:42.171 "data_size": 63488 00:14:42.171 } 00:14:42.171 ] 00:14:42.171 }' 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.171 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 [2024-11-05 11:30:41.436934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:42.171 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:42.171 Zero copy mechanism will not be used. 00:14:42.171 Running I/O for 60 seconds... 
00:14:42.740 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.740 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.740 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 [2024-11-05 11:30:41.804492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.740 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.740 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:42.740 [2024-11-05 11:30:41.871845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:42.740 [2024-11-05 11:30:41.873856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.740 [2024-11-05 11:30:41.981677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:42.740 [2024-11-05 11:30:41.982355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:43.000 [2024-11-05 11:30:42.109772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:43.000 [2024-11-05 11:30:42.110393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:43.259 [2024-11-05 11:30:42.373481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:43.518 187.00 IOPS, 561.00 MiB/s [2024-11-05T11:30:42.792Z] [2024-11-05 11:30:42.601467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:43.518 [2024-11-05 11:30:42.602023] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.778 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.778 "name": "raid_bdev1", 00:14:43.778 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:43.778 "strip_size_kb": 0, 00:14:43.778 "state": "online", 00:14:43.778 "raid_level": "raid1", 00:14:43.778 "superblock": true, 00:14:43.778 "num_base_bdevs": 2, 00:14:43.778 "num_base_bdevs_discovered": 2, 00:14:43.778 "num_base_bdevs_operational": 2, 00:14:43.778 "process": { 00:14:43.778 "type": "rebuild", 00:14:43.778 "target": "spare", 00:14:43.778 "progress": { 00:14:43.778 "blocks": 12288, 00:14:43.778 "percent": 19 00:14:43.778 } 00:14:43.778 }, 00:14:43.778 "base_bdevs_list": [ 00:14:43.778 { 00:14:43.778 "name": "spare", 
00:14:43.778 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:43.778 "is_configured": true, 00:14:43.778 "data_offset": 2048, 00:14:43.778 "data_size": 63488 00:14:43.778 }, 00:14:43.778 { 00:14:43.778 "name": "BaseBdev2", 00:14:43.778 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:43.778 "is_configured": true, 00:14:43.778 "data_offset": 2048, 00:14:43.778 "data_size": 63488 00:14:43.778 } 00:14:43.778 ] 00:14:43.778 }' 00:14:43.779 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.779 [2024-11-05 11:30:42.945986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:43.779 [2024-11-05 11:30:42.946816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:43.779 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.779 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.779 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.779 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.779 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.779 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.779 [2024-11-05 11:30:43.012996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.039 [2024-11-05 11:30:43.062647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:44.039 [2024-11-05 11:30:43.070113] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:44.039 [2024-11-05 11:30:43.072108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.039 [2024-11-05 11:30:43.072151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.039 [2024-11-05 11:30:43.072167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.039 [2024-11-05 11:30:43.106975] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.039 "name": "raid_bdev1", 00:14:44.039 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:44.039 "strip_size_kb": 0, 00:14:44.039 "state": "online", 00:14:44.039 "raid_level": "raid1", 00:14:44.039 "superblock": true, 00:14:44.039 "num_base_bdevs": 2, 00:14:44.039 "num_base_bdevs_discovered": 1, 00:14:44.039 "num_base_bdevs_operational": 1, 00:14:44.039 "base_bdevs_list": [ 00:14:44.039 { 00:14:44.039 "name": null, 00:14:44.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.039 "is_configured": false, 00:14:44.039 "data_offset": 0, 00:14:44.039 "data_size": 63488 00:14:44.039 }, 00:14:44.039 { 00:14:44.039 "name": "BaseBdev2", 00:14:44.039 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:44.039 "is_configured": true, 00:14:44.039 "data_offset": 2048, 00:14:44.039 "data_size": 63488 00:14:44.039 } 00:14:44.039 ] 00:14:44.039 }' 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.039 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.558 154.50 IOPS, 463.50 MiB/s [2024-11-05T11:30:43.832Z] 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.558 "name": "raid_bdev1", 00:14:44.558 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:44.558 "strip_size_kb": 0, 00:14:44.558 "state": "online", 00:14:44.558 "raid_level": "raid1", 00:14:44.558 "superblock": true, 00:14:44.558 "num_base_bdevs": 2, 00:14:44.558 "num_base_bdevs_discovered": 1, 00:14:44.558 "num_base_bdevs_operational": 1, 00:14:44.558 "base_bdevs_list": [ 00:14:44.558 { 00:14:44.558 "name": null, 00:14:44.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.558 "is_configured": false, 00:14:44.558 "data_offset": 0, 00:14:44.558 "data_size": 63488 00:14:44.558 }, 00:14:44.558 { 00:14:44.558 "name": "BaseBdev2", 00:14:44.558 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:44.558 "is_configured": true, 00:14:44.558 "data_offset": 2048, 00:14:44.558 "data_size": 63488 00:14:44.558 } 00:14:44.558 ] 00:14:44.558 }' 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.558 [2024-11-05 11:30:43.746811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.558 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.558 [2024-11-05 11:30:43.818532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:44.558 [2024-11-05 11:30:43.820907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.818 [2024-11-05 11:30:43.930088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:44.818 [2024-11-05 11:30:43.931203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.077 [2024-11-05 11:30:44.148641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.077 [2024-11-05 11:30:44.149268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.595 156.67 IOPS, 470.00 MiB/s [2024-11-05T11:30:44.869Z] [2024-11-05 11:30:44.635596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.595 "name": "raid_bdev1", 00:14:45.595 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:45.595 "strip_size_kb": 0, 00:14:45.595 "state": "online", 00:14:45.595 "raid_level": "raid1", 00:14:45.595 "superblock": true, 00:14:45.595 "num_base_bdevs": 2, 00:14:45.595 "num_base_bdevs_discovered": 2, 00:14:45.595 "num_base_bdevs_operational": 2, 00:14:45.595 "process": { 00:14:45.595 "type": "rebuild", 00:14:45.595 "target": "spare", 00:14:45.595 "progress": { 00:14:45.595 "blocks": 10240, 00:14:45.595 "percent": 16 00:14:45.595 } 00:14:45.595 }, 00:14:45.595 "base_bdevs_list": [ 00:14:45.595 { 00:14:45.595 "name": "spare", 00:14:45.595 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:45.595 "is_configured": true, 00:14:45.595 "data_offset": 2048, 00:14:45.595 "data_size": 63488 00:14:45.595 }, 00:14:45.595 { 00:14:45.595 "name": 
"BaseBdev2", 00:14:45.595 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:45.595 "is_configured": true, 00:14:45.595 "data_offset": 2048, 00:14:45.595 "data_size": 63488 00:14:45.595 } 00:14:45.595 ] 00:14:45.595 }' 00:14:45.595 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:45.862 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- 
# local target=spare 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.862 "name": "raid_bdev1", 00:14:45.862 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:45.862 "strip_size_kb": 0, 00:14:45.862 "state": "online", 00:14:45.862 "raid_level": "raid1", 00:14:45.862 "superblock": true, 00:14:45.862 "num_base_bdevs": 2, 00:14:45.862 "num_base_bdevs_discovered": 2, 00:14:45.862 "num_base_bdevs_operational": 2, 00:14:45.862 "process": { 00:14:45.862 "type": "rebuild", 00:14:45.862 "target": "spare", 00:14:45.862 "progress": { 00:14:45.862 "blocks": 12288, 00:14:45.862 "percent": 19 00:14:45.862 } 00:14:45.862 }, 00:14:45.862 "base_bdevs_list": [ 00:14:45.862 { 00:14:45.862 "name": "spare", 00:14:45.862 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:45.862 "is_configured": true, 00:14:45.862 "data_offset": 2048, 00:14:45.862 "data_size": 63488 00:14:45.862 }, 00:14:45.862 { 00:14:45.862 "name": "BaseBdev2", 00:14:45.862 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:45.862 "is_configured": true, 00:14:45.862 "data_offset": 2048, 00:14:45.862 "data_size": 63488 00:14:45.862 } 00:14:45.862 ] 00:14:45.862 }' 00:14:45.862 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:45.862 [2024-11-05 11:30:44.958475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:45.862 [2024-11-05 11:30:44.959243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:45.862 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.862 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.862 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.862 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.137 [2024-11-05 11:30:45.184958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.396 139.50 IOPS, 418.50 MiB/s [2024-11-05T11:30:45.670Z] [2024-11-05 11:30:45.514516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:46.396 [2024-11-05 11:30:45.515049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:46.655 [2024-11-05 11:30:45.834389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.913 11:30:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.913 "name": "raid_bdev1", 00:14:46.913 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:46.913 "strip_size_kb": 0, 00:14:46.913 "state": "online", 00:14:46.913 "raid_level": "raid1", 00:14:46.913 "superblock": true, 00:14:46.913 "num_base_bdevs": 2, 00:14:46.913 "num_base_bdevs_discovered": 2, 00:14:46.913 "num_base_bdevs_operational": 2, 00:14:46.913 "process": { 00:14:46.913 "type": "rebuild", 00:14:46.913 "target": "spare", 00:14:46.913 "progress": { 00:14:46.913 "blocks": 30720, 00:14:46.913 "percent": 48 00:14:46.913 } 00:14:46.913 }, 00:14:46.913 "base_bdevs_list": [ 00:14:46.913 { 00:14:46.913 "name": "spare", 00:14:46.913 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:46.913 "is_configured": true, 00:14:46.913 "data_offset": 2048, 00:14:46.913 "data_size": 63488 00:14:46.913 }, 00:14:46.913 { 00:14:46.913 "name": "BaseBdev2", 00:14:46.913 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:46.913 "is_configured": true, 00:14:46.913 "data_offset": 2048, 00:14:46.913 "data_size": 63488 00:14:46.913 } 00:14:46.913 ] 00:14:46.913 }' 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.913 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.913 [2024-11-05 11:30:46.160113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:47.172 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.172 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.172 [2024-11-05 11:30:46.262347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:47.740 123.00 IOPS, 369.00 MiB/s [2024-11-05T11:30:47.014Z] [2024-11-05 11:30:46.812500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:47.740 [2024-11-05 11:30:46.813464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:47.999 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.000 "name": "raid_bdev1", 00:14:48.000 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:48.000 "strip_size_kb": 0, 00:14:48.000 "state": "online", 00:14:48.000 "raid_level": "raid1", 00:14:48.000 "superblock": true, 00:14:48.000 "num_base_bdevs": 2, 00:14:48.000 "num_base_bdevs_discovered": 2, 00:14:48.000 "num_base_bdevs_operational": 2, 00:14:48.000 "process": { 00:14:48.000 "type": "rebuild", 00:14:48.000 "target": "spare", 00:14:48.000 "progress": { 00:14:48.000 "blocks": 49152, 00:14:48.000 "percent": 77 00:14:48.000 } 00:14:48.000 }, 00:14:48.000 "base_bdevs_list": [ 00:14:48.000 { 00:14:48.000 "name": "spare", 00:14:48.000 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:48.000 "is_configured": true, 00:14:48.000 "data_offset": 2048, 00:14:48.000 "data_size": 63488 00:14:48.000 }, 00:14:48.000 { 00:14:48.000 "name": "BaseBdev2", 00:14:48.000 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:48.000 "is_configured": true, 00:14:48.000 "data_offset": 2048, 00:14:48.000 "data_size": 63488 00:14:48.000 } 00:14:48.000 ] 00:14:48.000 }' 00:14:48.000 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.000 [2024-11-05 11:30:47.272318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:48.259 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.259 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.259 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.259 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.828 108.33 IOPS, 325.00 MiB/s [2024-11-05T11:30:48.102Z] [2024-11-05 11:30:48.042607] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.088 [2024-11-05 11:30:48.142445] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.088 [2024-11-05 11:30:48.151268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.088 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.347 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.347 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.348 11:30:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.348 "name": "raid_bdev1", 00:14:49.348 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:49.348 "strip_size_kb": 0, 00:14:49.348 "state": "online", 00:14:49.348 "raid_level": "raid1", 00:14:49.348 "superblock": true, 00:14:49.348 "num_base_bdevs": 2, 00:14:49.348 "num_base_bdevs_discovered": 2, 00:14:49.348 "num_base_bdevs_operational": 2, 00:14:49.348 "base_bdevs_list": [ 00:14:49.348 { 00:14:49.348 "name": "spare", 00:14:49.348 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 }, 00:14:49.348 { 00:14:49.348 "name": "BaseBdev2", 00:14:49.348 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 } 00:14:49.348 ] 00:14:49.348 }' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.348 98.14 IOPS, 294.43 MiB/s [2024-11-05T11:30:48.622Z] 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.348 "name": "raid_bdev1", 00:14:49.348 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:49.348 "strip_size_kb": 0, 00:14:49.348 "state": "online", 00:14:49.348 "raid_level": "raid1", 00:14:49.348 "superblock": true, 00:14:49.348 "num_base_bdevs": 2, 00:14:49.348 "num_base_bdevs_discovered": 2, 00:14:49.348 "num_base_bdevs_operational": 2, 00:14:49.348 "base_bdevs_list": [ 00:14:49.348 { 00:14:49.348 "name": "spare", 00:14:49.348 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 }, 00:14:49.348 { 00:14:49.348 "name": "BaseBdev2", 00:14:49.348 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 } 00:14:49.348 ] 00:14:49.348 }' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:14:49.348 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.608 "name": "raid_bdev1", 00:14:49.608 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:49.608 "strip_size_kb": 0, 00:14:49.608 "state": "online", 00:14:49.608 "raid_level": "raid1", 00:14:49.608 "superblock": true, 00:14:49.608 "num_base_bdevs": 2, 00:14:49.608 "num_base_bdevs_discovered": 2, 00:14:49.608 "num_base_bdevs_operational": 2, 00:14:49.608 "base_bdevs_list": [ 00:14:49.608 { 00:14:49.608 "name": "spare", 00:14:49.608 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25", 00:14:49.608 "is_configured": true, 00:14:49.608 "data_offset": 2048, 00:14:49.608 "data_size": 63488 00:14:49.608 }, 00:14:49.608 { 00:14:49.608 "name": "BaseBdev2", 00:14:49.608 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:49.608 "is_configured": true, 00:14:49.608 "data_offset": 2048, 00:14:49.608 "data_size": 63488 00:14:49.608 } 00:14:49.608 ] 00:14:49.608 }' 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.608 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.867 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:49.867 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.867 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.867 [2024-11-05 11:30:49.117296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.867 [2024-11-05 11:30:49.117409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.867 00:14:49.867 Latency(us) 00:14:49.867 [2024-11-05T11:30:49.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.867 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:49.867 raid_bdev1 : 7.71 91.66 
274.99 0.00 0.00 14770.32 316.59 116304.94 00:14:49.867 [2024-11-05T11:30:49.141Z] =================================================================================================================== 00:14:49.867 [2024-11-05T11:30:49.141Z] Total : 91.66 274.99 0.00 0.00 14770.32 316.59 116304.94 00:14:50.127 [2024-11-05 11:30:49.161661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.127 [2024-11-05 11:30:49.161772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.127 [2024-11-05 11:30:49.161898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.127 [2024-11-05 11:30:49.161957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.127 { 00:14:50.127 "results": [ 00:14:50.127 { 00:14:50.127 "job": "raid_bdev1", 00:14:50.127 "core_mask": "0x1", 00:14:50.127 "workload": "randrw", 00:14:50.127 "percentage": 50, 00:14:50.127 "status": "finished", 00:14:50.127 "queue_depth": 2, 00:14:50.127 "io_size": 3145728, 00:14:50.127 "runtime": 7.712917, 00:14:50.127 "iops": 91.66441179128468, 00:14:50.127 "mibps": 274.993235373854, 00:14:50.127 "io_failed": 0, 00:14:50.127 "io_timeout": 0, 00:14:50.127 "avg_latency_us": 14770.315954614802, 00:14:50.127 "min_latency_us": 316.5903930131004, 00:14:50.127 "max_latency_us": 116304.93624454149 00:14:50.127 } 00:14:50.127 ], 00:14:50.127 "core_count": 1 00:14:50.127 } 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.127 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:50.387 /dev/nbd0 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:50.387 11:30:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.387 1+0 records in 00:14:50.387 1+0 records out 00:14:50.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536523 s, 7.6 MB/s 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.387 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:50.646 /dev/nbd1 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:50.646 1+0 records in
00:14:50.646 1+0 records out
00:14:50.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316593 s, 12.9 MB/s
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:50.646 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:50.905 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:50.905 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.165 [2024-11-05 11:30:50.400869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:51.165 [2024-11-05 11:30:50.400990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:51.165 [2024-11-05 11:30:50.401043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:14:51.165 [2024-11-05 11:30:50.401071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:51.165 [2024-11-05 11:30:50.403788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:51.165 [2024-11-05 11:30:50.403862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:51.165 [2024-11-05 11:30:50.403985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:51.165 [2024-11-05 11:30:50.404061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:51.165 [2024-11-05 11:30:50.404271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:51.165 spare
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.165 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.432 [2024-11-05 11:30:50.504230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:14:51.432 [2024-11-05 11:30:50.504334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:51.432 [2024-11-05 11:30:50.504702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
00:14:51.432 [2024-11-05 11:30:50.504956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:14:51.432 [2024-11-05 11:30:50.505003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:14:51.432 [2024-11-05 11:30:50.505258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.432 "name": "raid_bdev1",
00:14:51.432 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:51.432 "strip_size_kb": 0,
00:14:51.432 "state": "online",
00:14:51.432 "raid_level": "raid1",
00:14:51.432 "superblock": true,
00:14:51.432 "num_base_bdevs": 2,
00:14:51.432 "num_base_bdevs_discovered": 2,
00:14:51.432 "num_base_bdevs_operational": 2,
00:14:51.432 "base_bdevs_list": [
00:14:51.432 {
00:14:51.432 "name": "spare",
00:14:51.432 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25",
00:14:51.432 "is_configured": true,
00:14:51.432 "data_offset": 2048,
00:14:51.432 "data_size": 63488
00:14:51.432 },
00:14:51.432 {
00:14:51.432 "name": "BaseBdev2",
00:14:51.432 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:51.432 "is_configured": true,
00:14:51.432 "data_offset": 2048,
00:14:51.432 "data_size": 63488
00:14:51.432 }
00:14:51.432 ]
00:14:51.432 }'
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.432 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.702 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.962 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:51.962 "name": "raid_bdev1",
00:14:51.962 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:51.962 "strip_size_kb": 0,
00:14:51.962 "state": "online",
00:14:51.962 "raid_level": "raid1",
00:14:51.962 "superblock": true,
00:14:51.962 "num_base_bdevs": 2,
00:14:51.962 "num_base_bdevs_discovered": 2,
00:14:51.962 "num_base_bdevs_operational": 2,
00:14:51.962 "base_bdevs_list": [
00:14:51.962 {
00:14:51.962 "name": "spare",
00:14:51.962 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25",
00:14:51.962 "is_configured": true,
00:14:51.962 "data_offset": 2048,
00:14:51.962 "data_size": 63488
00:14:51.962 },
00:14:51.962 {
00:14:51.962 "name": "BaseBdev2",
00:14:51.962 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:51.962 "is_configured": true,
00:14:51.962 "data_offset": 2048,
00:14:51.962 "data_size": 63488
00:14:51.962 }
00:14:51.962 ]
00:14:51.962 }'
00:14:51.962 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.962 [2024-11-05 11:30:51.128278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.962 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.962 "name": "raid_bdev1",
00:14:51.962 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:51.962 "strip_size_kb": 0,
00:14:51.962 "state": "online",
00:14:51.962 "raid_level": "raid1",
00:14:51.962 "superblock": true,
00:14:51.962 "num_base_bdevs": 2,
00:14:51.962 "num_base_bdevs_discovered": 1,
00:14:51.962 "num_base_bdevs_operational": 1,
00:14:51.962 "base_bdevs_list": [
00:14:51.962 {
00:14:51.963 "name": null,
00:14:51.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:51.963 "is_configured": false,
00:14:51.963 "data_offset": 0,
00:14:51.963 "data_size": 63488
00:14:51.963 },
00:14:51.963 {
00:14:51.963 "name": "BaseBdev2",
00:14:51.963 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:51.963 "is_configured": true,
00:14:51.963 "data_offset": 2048,
00:14:51.963 "data_size": 63488
00:14:51.963 }
00:14:51.963 ]
00:14:51.963 }'
00:14:51.963 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.963 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.532 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:52.532 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.532 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.532 [2024-11-05 11:30:51.635492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:52.532 [2024-11-05 11:30:51.635832] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:52.532 [2024-11-05 11:30:51.635898] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:52.532 [2024-11-05 11:30:51.635998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:52.532 [2024-11-05 11:30:51.654662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:14:52.532 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.532 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:52.532 [2024-11-05 11:30:51.656899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:53.472 "name": "raid_bdev1",
00:14:53.472 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:53.472 "strip_size_kb": 0,
00:14:53.472 "state": "online",
00:14:53.472 "raid_level": "raid1",
00:14:53.472 "superblock": true,
00:14:53.472 "num_base_bdevs": 2,
00:14:53.472 "num_base_bdevs_discovered": 2,
00:14:53.472 "num_base_bdevs_operational": 2,
00:14:53.472 "process": {
00:14:53.472 "type": "rebuild",
00:14:53.472 "target": "spare",
00:14:53.472 "progress": {
00:14:53.472 "blocks": 20480,
00:14:53.472 "percent": 32
00:14:53.472 }
00:14:53.472 },
00:14:53.472 "base_bdevs_list": [
00:14:53.472 {
00:14:53.472 "name": "spare",
00:14:53.472 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25",
00:14:53.472 "is_configured": true,
00:14:53.472 "data_offset": 2048,
00:14:53.472 "data_size": 63488
00:14:53.472 },
00:14:53.472 {
00:14:53.472 "name": "BaseBdev2",
00:14:53.472 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:53.472 "is_configured": true,
00:14:53.472 "data_offset": 2048,
00:14:53.472 "data_size": 63488
00:14:53.472 }
00:14:53.472 ]
00:14:53.472 }'
00:14:53.472 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.732 [2024-11-05 11:30:52.808962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:53.732 [2024-11-05 11:30:52.866635] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:53.732 [2024-11-05 11:30:52.866705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:53.732 [2024-11-05 11:30:52.866721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:53.732 [2024-11-05 11:30:52.866735] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.732 "name": "raid_bdev1",
00:14:53.732 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:53.732 "strip_size_kb": 0,
00:14:53.732 "state": "online",
00:14:53.732 "raid_level": "raid1",
00:14:53.732 "superblock": true,
00:14:53.732 "num_base_bdevs": 2,
00:14:53.732 "num_base_bdevs_discovered": 1,
00:14:53.732 "num_base_bdevs_operational": 1,
00:14:53.732 "base_bdevs_list": [
00:14:53.732 {
00:14:53.732 "name": null,
00:14:53.732 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.732 "is_configured": false,
00:14:53.732 "data_offset": 0,
00:14:53.732 "data_size": 63488
00:14:53.732 },
00:14:53.732 {
00:14:53.732 "name": "BaseBdev2",
00:14:53.732 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:53.732 "is_configured": true,
00:14:53.732 "data_offset": 2048,
00:14:53.732 "data_size": 63488
00:14:53.732 }
00:14:53.732 ]
00:14:53.732 }'
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.732 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:54.301 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:54.301 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.301 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:54.301 [2024-11-05 11:30:53.387349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:54.301 [2024-11-05 11:30:53.387531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:54.301 [2024-11-05 11:30:53.387581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:14:54.301 [2024-11-05 11:30:53.387637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:54.301 [2024-11-05 11:30:53.388309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:54.301 [2024-11-05 11:30:53.388373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:54.301 [2024-11-05 11:30:53.388523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:54.301 [2024-11-05 11:30:53.388570] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:54.301 [2024-11-05 11:30:53.388617] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:54.301 [2024-11-05 11:30:53.388683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:54.301 [2024-11-05 11:30:53.407641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:14:54.301 spare
00:14:54.301 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.301 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:54.301 [2024-11-05 11:30:53.410028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.240 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:55.240 "name": "raid_bdev1",
00:14:55.240 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:55.240 "strip_size_kb": 0,
00:14:55.240 "state": "online",
00:14:55.240 "raid_level": "raid1",
00:14:55.240 "superblock": true,
00:14:55.240 "num_base_bdevs": 2,
00:14:55.240 "num_base_bdevs_discovered": 2,
00:14:55.240 "num_base_bdevs_operational": 2,
00:14:55.240 "process": {
00:14:55.240 "type": "rebuild",
00:14:55.240 "target": "spare",
00:14:55.240 "progress": {
00:14:55.240 "blocks": 20480,
00:14:55.240 "percent": 32
00:14:55.240 }
00:14:55.240 },
00:14:55.240 "base_bdevs_list": [
00:14:55.240 {
00:14:55.240 "name": "spare",
00:14:55.240 "uuid": "5eb1be5a-45e2-509b-8552-d369855f9e25",
00:14:55.240 "is_configured": true,
00:14:55.240 "data_offset": 2048,
00:14:55.240 "data_size": 63488
00:14:55.240 },
00:14:55.240 {
00:14:55.240 "name": "BaseBdev2",
00:14:55.240 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:55.240 "is_configured": true,
00:14:55.240 "data_offset": 2048,
00:14:55.240 "data_size": 63488
00:14:55.240 }
00:14:55.240 ]
00:14:55.240 }'
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.500 [2024-11-05 11:30:54.562302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:55.500 [2024-11-05 11:30:54.619979] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:55.500 [2024-11-05 11:30:54.620154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:55.500 [2024-11-05 11:30:54.620183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:55.500 [2024-11-05 11:30:54.620191] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:55.500 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:55.501 "name": "raid_bdev1",
00:14:55.501 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:55.501 "strip_size_kb": 0,
00:14:55.501 "state": "online",
00:14:55.501 "raid_level": "raid1",
00:14:55.501 "superblock": true,
00:14:55.501 "num_base_bdevs": 2,
00:14:55.501 "num_base_bdevs_discovered": 1,
00:14:55.501 "num_base_bdevs_operational": 1,
00:14:55.501 "base_bdevs_list": [
00:14:55.501 {
00:14:55.501 "name": null,
00:14:55.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.501 "is_configured": false,
00:14:55.501 "data_offset": 0,
00:14:55.501 "data_size": 63488
00:14:55.501 },
00:14:55.501 {
00:14:55.501 "name": "BaseBdev2",
00:14:55.501 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:55.501 "is_configured": true,
00:14:55.501 "data_offset": 2048,
00:14:55.501 "data_size": 63488
00:14:55.501 }
00:14:55.501 ]
00:14:55.501 }'
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:55.501 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:56.071 "name": "raid_bdev1",
00:14:56.071 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5",
00:14:56.071 "strip_size_kb": 0,
00:14:56.071 "state": "online",
00:14:56.071 "raid_level": "raid1",
00:14:56.071 "superblock": true,
00:14:56.071 "num_base_bdevs": 2,
00:14:56.071 "num_base_bdevs_discovered": 1,
00:14:56.071 "num_base_bdevs_operational": 1,
00:14:56.071 "base_bdevs_list": [
00:14:56.071 {
00:14:56.071 "name": null,
00:14:56.071 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:56.071 "is_configured": false,
00:14:56.071 "data_offset": 0,
00:14:56.071 "data_size": 63488
00:14:56.071 },
00:14:56.071 {
00:14:56.071 "name": "BaseBdev2",
00:14:56.071 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca",
00:14:56.071 "is_configured": true,
00:14:56.071 "data_offset": 2048,
00:14:56.071 "data_size": 63488
00:14:56.071 }
00:14:56.071 ]
00:14:56.071 }'
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:56.071 [2024-11-05 11:30:55.189919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:56.071 [2024-11-05 11:30:55.189990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:56.071 [2024-11-05 11:30:55.190019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:14:56.071 [2024-11-05 11:30:55.190030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:56.071 [2024-11-05 11:30:55.190635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:56.071 [2024-11-05 11:30:55.190656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:56.071 [2024-11-05 11:30:55.190751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:56.071 [2024-11-05 11:30:55.190769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:56.071 [2024-11-05 11:30:55.190783] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:56.071 [2024-11-05 11:30:55.190797]
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:56.071 BaseBdev1 00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.071 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:57.011 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:57.011 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.011 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.012 "name": "raid_bdev1", 00:14:57.012 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:57.012 "strip_size_kb": 0, 00:14:57.012 "state": "online", 00:14:57.012 "raid_level": "raid1", 00:14:57.012 "superblock": true, 00:14:57.012 "num_base_bdevs": 2, 00:14:57.012 "num_base_bdevs_discovered": 1, 00:14:57.012 "num_base_bdevs_operational": 1, 00:14:57.012 "base_bdevs_list": [ 00:14:57.012 { 00:14:57.012 "name": null, 00:14:57.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.012 "is_configured": false, 00:14:57.012 "data_offset": 0, 00:14:57.012 "data_size": 63488 00:14:57.012 }, 00:14:57.012 { 00:14:57.012 "name": "BaseBdev2", 00:14:57.012 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:57.012 "is_configured": true, 00:14:57.012 "data_offset": 2048, 00:14:57.012 "data_size": 63488 00:14:57.012 } 00:14:57.012 ] 00:14:57.012 }' 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.012 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.582 "name": "raid_bdev1", 00:14:57.582 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:57.582 "strip_size_kb": 0, 00:14:57.582 "state": "online", 00:14:57.582 "raid_level": "raid1", 00:14:57.582 "superblock": true, 00:14:57.582 "num_base_bdevs": 2, 00:14:57.582 "num_base_bdevs_discovered": 1, 00:14:57.582 "num_base_bdevs_operational": 1, 00:14:57.582 "base_bdevs_list": [ 00:14:57.582 { 00:14:57.582 "name": null, 00:14:57.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.582 "is_configured": false, 00:14:57.582 "data_offset": 0, 00:14:57.582 "data_size": 63488 00:14:57.582 }, 00:14:57.582 { 00:14:57.582 "name": "BaseBdev2", 00:14:57.582 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:57.582 "is_configured": true, 00:14:57.582 "data_offset": 2048, 00:14:57.582 "data_size": 63488 00:14:57.582 } 00:14:57.582 ] 00:14:57.582 }' 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 [2024-11-05 11:30:56.827370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.582 [2024-11-05 11:30:56.827630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:57.582 [2024-11-05 11:30:56.827697] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:57.582 request: 00:14:57.582 { 00:14:57.582 "base_bdev": "BaseBdev1", 00:14:57.582 "raid_bdev": "raid_bdev1", 00:14:57.582 "method": "bdev_raid_add_base_bdev", 00:14:57.582 "req_id": 1 00:14:57.582 } 00:14:57.582 Got JSON-RPC error response 00:14:57.582 response: 00:14:57.582 { 00:14:57.582 "code": -22, 00:14:57.582 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:57.582 } 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.582 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.962 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.962 "name": "raid_bdev1", 00:14:58.962 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:58.962 "strip_size_kb": 0, 00:14:58.962 "state": "online", 00:14:58.963 "raid_level": "raid1", 00:14:58.963 "superblock": true, 00:14:58.963 "num_base_bdevs": 2, 00:14:58.963 "num_base_bdevs_discovered": 1, 00:14:58.963 "num_base_bdevs_operational": 1, 00:14:58.963 "base_bdevs_list": [ 00:14:58.963 { 00:14:58.963 "name": null, 00:14:58.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.963 "is_configured": false, 00:14:58.963 "data_offset": 0, 00:14:58.963 "data_size": 63488 00:14:58.963 }, 00:14:58.963 { 00:14:58.963 "name": "BaseBdev2", 00:14:58.963 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:58.963 "is_configured": true, 00:14:58.963 "data_offset": 2048, 00:14:58.963 "data_size": 63488 00:14:58.963 } 00:14:58.963 ] 00:14:58.963 }' 00:14:58.963 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.963 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.223 11:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.223 "name": "raid_bdev1", 00:14:59.223 "uuid": "e9a3bca5-7a7d-4620-a3d9-3e3001cc3fe5", 00:14:59.223 "strip_size_kb": 0, 00:14:59.223 "state": "online", 00:14:59.223 "raid_level": "raid1", 00:14:59.223 "superblock": true, 00:14:59.223 "num_base_bdevs": 2, 00:14:59.223 "num_base_bdevs_discovered": 1, 00:14:59.223 "num_base_bdevs_operational": 1, 00:14:59.223 "base_bdevs_list": [ 00:14:59.223 { 00:14:59.223 "name": null, 00:14:59.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.223 "is_configured": false, 00:14:59.223 "data_offset": 0, 00:14:59.223 "data_size": 63488 00:14:59.223 }, 00:14:59.223 { 00:14:59.223 "name": "BaseBdev2", 00:14:59.223 "uuid": "260d1be3-dfa1-5e31-a815-fb17d5cd2cca", 00:14:59.223 "is_configured": true, 00:14:59.223 "data_offset": 2048, 00:14:59.223 "data_size": 63488 00:14:59.223 } 00:14:59.223 ] 00:14:59.223 }' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.223 11:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76925 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 76925 ']' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 76925 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76925 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76925' 00:14:59.223 killing process with pid 76925 00:14:59.223 Received shutdown signal, test time was about 17.009036 seconds 00:14:59.223 00:14:59.223 Latency(us) 00:14:59.223 [2024-11-05T11:30:58.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.223 [2024-11-05T11:30:58.497Z] =================================================================================================================== 00:14:59.223 [2024-11-05T11:30:58.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 76925 00:14:59.223 [2024-11-05 11:30:58.415355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.223 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 76925 00:14:59.223 [2024-11-05 11:30:58.415536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.223 [2024-11-05 11:30:58.415602] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.223 [2024-11-05 11:30:58.415616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:59.483 [2024-11-05 11:30:58.664299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.864 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:00.864 00:15:00.864 real 0m20.222s 00:15:00.864 user 0m26.442s 00:15:00.864 sys 0m2.257s 00:15:00.864 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:00.864 ************************************ 00:15:00.865 END TEST raid_rebuild_test_sb_io 00:15:00.865 ************************************ 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.865 11:30:59 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:00.865 11:30:59 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:00.865 11:30:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:00.865 11:30:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:00.865 11:30:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.865 ************************************ 00:15:00.865 START TEST raid_rebuild_test 00:15:00.865 ************************************ 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:00.865 11:30:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.865 11:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77615 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77615 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77615 ']' 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:00.865 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.865 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.865 Zero copy mechanism will not be used. 
00:15:00.865 [2024-11-05 11:31:00.096204] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:15:00.865 [2024-11-05 11:31:00.096318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77615 ] 00:15:01.125 [2024-11-05 11:31:00.270486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.384 [2024-11-05 11:31:00.407373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.384 [2024-11-05 11:31:00.642040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.384 [2024-11-05 11:31:00.642105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.644 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.903 BaseBdev1_malloc 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.903 
[2024-11-05 11:31:00.972659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.903 [2024-11-05 11:31:00.972741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.903 [2024-11-05 11:31:00.972771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.903 [2024-11-05 11:31:00.972784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.903 [2024-11-05 11:31:00.975217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.903 [2024-11-05 11:31:00.975251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.903 BaseBdev1 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.903 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.904 11:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.904 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 BaseBdev2_malloc 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 [2024-11-05 11:31:01.035398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.904 [2024-11-05 11:31:01.035555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:01.904 [2024-11-05 11:31:01.035582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.904 [2024-11-05 11:31:01.035595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.904 [2024-11-05 11:31:01.038043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.904 [2024-11-05 11:31:01.038082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.904 BaseBdev2 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 BaseBdev3_malloc 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 [2024-11-05 11:31:01.113210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:01.904 [2024-11-05 11:31:01.113350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.904 [2024-11-05 11:31:01.113390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.904 [2024-11-05 11:31:01.113421] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.904 [2024-11-05 11:31:01.115891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.904 [2024-11-05 11:31:01.115969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:01.904 BaseBdev3 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 BaseBdev4_malloc 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.904 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.904 [2024-11-05 11:31:01.175573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:01.904 [2024-11-05 11:31:01.175644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.904 [2024-11-05 11:31:01.175668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:01.904 [2024-11-05 11:31:01.175681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.164 [2024-11-05 11:31:01.178195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.164 [2024-11-05 11:31:01.178232] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:02.164 BaseBdev4 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 spare_malloc 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 spare_delay 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 [2024-11-05 11:31:01.246408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.164 [2024-11-05 11:31:01.246476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.164 [2024-11-05 11:31:01.246495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:02.164 [2024-11-05 11:31:01.246506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.164 [2024-11-05 
11:31:01.248843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.164 spare 00:15:02.164 [2024-11-05 11:31:01.248952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.164 [2024-11-05 11:31:01.254450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.164 [2024-11-05 11:31:01.256553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.164 [2024-11-05 11:31:01.256658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.164 [2024-11-05 11:31:01.256744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.164 [2024-11-05 11:31:01.256859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:02.164 [2024-11-05 11:31:01.256902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:02.164 [2024-11-05 11:31:01.257178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:02.164 [2024-11-05 11:31:01.257389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:02.164 [2024-11-05 11:31:01.257434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:02.164 [2024-11-05 11:31:01.257626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.164 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.165 "name": "raid_bdev1", 00:15:02.165 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:02.165 "strip_size_kb": 0, 00:15:02.165 "state": "online", 00:15:02.165 "raid_level": 
"raid1", 00:15:02.165 "superblock": false, 00:15:02.165 "num_base_bdevs": 4, 00:15:02.165 "num_base_bdevs_discovered": 4, 00:15:02.165 "num_base_bdevs_operational": 4, 00:15:02.165 "base_bdevs_list": [ 00:15:02.165 { 00:15:02.165 "name": "BaseBdev1", 00:15:02.165 "uuid": "19cf6b05-5349-55c9-8c37-32d59d603302", 00:15:02.165 "is_configured": true, 00:15:02.165 "data_offset": 0, 00:15:02.165 "data_size": 65536 00:15:02.165 }, 00:15:02.165 { 00:15:02.165 "name": "BaseBdev2", 00:15:02.165 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:02.165 "is_configured": true, 00:15:02.165 "data_offset": 0, 00:15:02.165 "data_size": 65536 00:15:02.165 }, 00:15:02.165 { 00:15:02.165 "name": "BaseBdev3", 00:15:02.165 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:02.165 "is_configured": true, 00:15:02.165 "data_offset": 0, 00:15:02.165 "data_size": 65536 00:15:02.165 }, 00:15:02.165 { 00:15:02.165 "name": "BaseBdev4", 00:15:02.165 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:02.165 "is_configured": true, 00:15:02.165 "data_offset": 0, 00:15:02.165 "data_size": 65536 00:15:02.165 } 00:15:02.165 ] 00:15:02.165 }' 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.165 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.734 [2024-11-05 11:31:01.757986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.734 11:31:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.734 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.735 11:31:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:02.994 [2024-11-05 11:31:02.053196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:02.994 /dev/nbd0 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.994 1+0 records in 00:15:02.994 1+0 records out 00:15:02.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573045 s, 7.1 MB/s 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:02.994 11:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:09.567 65536+0 records in 00:15:09.567 65536+0 records out 00:15:09.567 33554432 bytes (34 MB, 32 MiB) copied, 5.77688 s, 5.8 MB/s 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.567 11:31:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.567 [2024-11-05 11:31:08.103110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.567 
11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 [2024-11-05 11:31:08.140186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.567 11:31:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.567 "name": "raid_bdev1", 00:15:09.567 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:09.567 "strip_size_kb": 0, 00:15:09.567 "state": "online", 00:15:09.567 "raid_level": "raid1", 00:15:09.567 "superblock": false, 00:15:09.567 "num_base_bdevs": 4, 00:15:09.567 "num_base_bdevs_discovered": 3, 00:15:09.567 "num_base_bdevs_operational": 3, 00:15:09.567 "base_bdevs_list": [ 00:15:09.567 { 00:15:09.567 "name": null, 00:15:09.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.567 "is_configured": false, 00:15:09.567 "data_offset": 0, 00:15:09.567 "data_size": 65536 00:15:09.567 }, 00:15:09.567 { 00:15:09.567 "name": "BaseBdev2", 00:15:09.567 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:09.567 "is_configured": true, 00:15:09.567 "data_offset": 0, 00:15:09.567 "data_size": 65536 00:15:09.567 }, 00:15:09.567 { 00:15:09.567 "name": "BaseBdev3", 00:15:09.567 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:09.567 "is_configured": true, 00:15:09.567 "data_offset": 0, 00:15:09.567 "data_size": 65536 00:15:09.567 }, 00:15:09.567 { 00:15:09.567 "name": "BaseBdev4", 00:15:09.567 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:09.567 
"is_configured": true, 00:15:09.567 "data_offset": 0, 00:15:09.567 "data_size": 65536 00:15:09.567 } 00:15:09.567 ] 00:15:09.567 }' 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 [2024-11-05 11:31:08.543476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.567 [2024-11-05 11:31:08.559715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.567 11:31:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:09.567 [2024-11-05 11:31:08.562000] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.505 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.505 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.506 "name": "raid_bdev1", 00:15:10.506 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:10.506 "strip_size_kb": 0, 00:15:10.506 "state": "online", 00:15:10.506 "raid_level": "raid1", 00:15:10.506 "superblock": false, 00:15:10.506 "num_base_bdevs": 4, 00:15:10.506 "num_base_bdevs_discovered": 4, 00:15:10.506 "num_base_bdevs_operational": 4, 00:15:10.506 "process": { 00:15:10.506 "type": "rebuild", 00:15:10.506 "target": "spare", 00:15:10.506 "progress": { 00:15:10.506 "blocks": 20480, 00:15:10.506 "percent": 31 00:15:10.506 } 00:15:10.506 }, 00:15:10.506 "base_bdevs_list": [ 00:15:10.506 { 00:15:10.506 "name": "spare", 00:15:10.506 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:10.506 "is_configured": true, 00:15:10.506 "data_offset": 0, 00:15:10.506 "data_size": 65536 00:15:10.506 }, 00:15:10.506 { 00:15:10.506 "name": "BaseBdev2", 00:15:10.506 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:10.506 "is_configured": true, 00:15:10.506 "data_offset": 0, 00:15:10.506 "data_size": 65536 00:15:10.506 }, 00:15:10.506 { 00:15:10.506 "name": "BaseBdev3", 00:15:10.506 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:10.506 "is_configured": true, 00:15:10.506 "data_offset": 0, 00:15:10.506 "data_size": 65536 00:15:10.506 }, 00:15:10.506 { 00:15:10.506 "name": "BaseBdev4", 00:15:10.506 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:10.506 "is_configured": true, 00:15:10.506 "data_offset": 0, 00:15:10.506 "data_size": 65536 00:15:10.506 } 00:15:10.506 ] 00:15:10.506 }' 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.506 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.506 [2024-11-05 11:31:09.721202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.506 [2024-11-05 11:31:09.770782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:10.506 [2024-11-05 11:31:09.770852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.506 [2024-11-05 11:31:09.770869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.506 [2024-11-05 11:31:09.770880] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.765 "name": "raid_bdev1", 00:15:10.765 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:10.765 "strip_size_kb": 0, 00:15:10.765 "state": "online", 00:15:10.765 "raid_level": "raid1", 00:15:10.765 "superblock": false, 00:15:10.765 "num_base_bdevs": 4, 00:15:10.765 "num_base_bdevs_discovered": 3, 00:15:10.765 "num_base_bdevs_operational": 3, 00:15:10.765 "base_bdevs_list": [ 00:15:10.765 { 00:15:10.765 "name": null, 00:15:10.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.765 "is_configured": false, 00:15:10.765 "data_offset": 0, 00:15:10.765 "data_size": 65536 00:15:10.765 }, 00:15:10.765 { 00:15:10.765 "name": "BaseBdev2", 00:15:10.765 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:10.765 "is_configured": true, 00:15:10.765 "data_offset": 0, 00:15:10.765 "data_size": 65536 00:15:10.765 }, 00:15:10.765 { 
00:15:10.765 "name": "BaseBdev3", 00:15:10.765 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:10.765 "is_configured": true, 00:15:10.765 "data_offset": 0, 00:15:10.765 "data_size": 65536 00:15:10.765 }, 00:15:10.765 { 00:15:10.765 "name": "BaseBdev4", 00:15:10.765 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:10.765 "is_configured": true, 00:15:10.765 "data_offset": 0, 00:15:10.765 "data_size": 65536 00:15:10.765 } 00:15:10.765 ] 00:15:10.765 }' 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.765 11:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.025 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.289 "name": "raid_bdev1", 00:15:11.289 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:11.289 "strip_size_kb": 0, 00:15:11.289 "state": "online", 
00:15:11.289 "raid_level": "raid1", 00:15:11.289 "superblock": false, 00:15:11.289 "num_base_bdevs": 4, 00:15:11.289 "num_base_bdevs_discovered": 3, 00:15:11.289 "num_base_bdevs_operational": 3, 00:15:11.289 "base_bdevs_list": [ 00:15:11.289 { 00:15:11.289 "name": null, 00:15:11.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.289 "is_configured": false, 00:15:11.289 "data_offset": 0, 00:15:11.289 "data_size": 65536 00:15:11.289 }, 00:15:11.289 { 00:15:11.289 "name": "BaseBdev2", 00:15:11.289 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:11.289 "is_configured": true, 00:15:11.289 "data_offset": 0, 00:15:11.289 "data_size": 65536 00:15:11.289 }, 00:15:11.289 { 00:15:11.289 "name": "BaseBdev3", 00:15:11.289 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:11.289 "is_configured": true, 00:15:11.289 "data_offset": 0, 00:15:11.289 "data_size": 65536 00:15:11.289 }, 00:15:11.289 { 00:15:11.289 "name": "BaseBdev4", 00:15:11.289 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:11.289 "is_configured": true, 00:15:11.289 "data_offset": 0, 00:15:11.289 "data_size": 65536 00:15:11.289 } 00:15:11.289 ] 00:15:11.289 }' 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.289 [2024-11-05 11:31:10.386717] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.289 [2024-11-05 11:31:10.401432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.289 11:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:11.289 [2024-11-05 11:31:10.403553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.224 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.225 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.225 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.225 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.225 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.225 "name": "raid_bdev1", 00:15:12.225 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:12.225 "strip_size_kb": 0, 00:15:12.225 "state": "online", 00:15:12.225 "raid_level": "raid1", 00:15:12.225 "superblock": false, 00:15:12.225 "num_base_bdevs": 4, 00:15:12.225 
"num_base_bdevs_discovered": 4, 00:15:12.225 "num_base_bdevs_operational": 4, 00:15:12.225 "process": { 00:15:12.225 "type": "rebuild", 00:15:12.225 "target": "spare", 00:15:12.225 "progress": { 00:15:12.225 "blocks": 20480, 00:15:12.225 "percent": 31 00:15:12.225 } 00:15:12.225 }, 00:15:12.225 "base_bdevs_list": [ 00:15:12.225 { 00:15:12.225 "name": "spare", 00:15:12.225 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:12.225 "is_configured": true, 00:15:12.225 "data_offset": 0, 00:15:12.225 "data_size": 65536 00:15:12.225 }, 00:15:12.225 { 00:15:12.225 "name": "BaseBdev2", 00:15:12.225 "uuid": "401355d4-c8cd-59ee-baec-375053de25b4", 00:15:12.225 "is_configured": true, 00:15:12.225 "data_offset": 0, 00:15:12.225 "data_size": 65536 00:15:12.225 }, 00:15:12.225 { 00:15:12.225 "name": "BaseBdev3", 00:15:12.225 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:12.225 "is_configured": true, 00:15:12.225 "data_offset": 0, 00:15:12.225 "data_size": 65536 00:15:12.225 }, 00:15:12.225 { 00:15:12.225 "name": "BaseBdev4", 00:15:12.225 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:12.225 "is_configured": true, 00:15:12.225 "data_offset": 0, 00:15:12.225 "data_size": 65536 00:15:12.225 } 00:15:12.225 ] 00:15:12.225 }' 00:15:12.225 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.485 [2024-11-05 11:31:11.547369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.485 [2024-11-05 11:31:11.613033] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.485 11:31:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.485 "name": "raid_bdev1", 00:15:12.485 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:12.485 "strip_size_kb": 0, 00:15:12.485 "state": "online", 00:15:12.485 "raid_level": "raid1", 00:15:12.485 "superblock": false, 00:15:12.485 "num_base_bdevs": 4, 00:15:12.485 "num_base_bdevs_discovered": 3, 00:15:12.485 "num_base_bdevs_operational": 3, 00:15:12.485 "process": { 00:15:12.485 "type": "rebuild", 00:15:12.485 "target": "spare", 00:15:12.485 "progress": { 00:15:12.485 "blocks": 24576, 00:15:12.485 "percent": 37 00:15:12.485 } 00:15:12.485 }, 00:15:12.485 "base_bdevs_list": [ 00:15:12.485 { 00:15:12.485 "name": "spare", 00:15:12.485 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:12.485 "is_configured": true, 00:15:12.485 "data_offset": 0, 00:15:12.485 "data_size": 65536 00:15:12.485 }, 00:15:12.485 { 00:15:12.485 "name": null, 00:15:12.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.485 "is_configured": false, 00:15:12.485 "data_offset": 0, 00:15:12.485 "data_size": 65536 00:15:12.485 }, 00:15:12.485 { 00:15:12.485 "name": "BaseBdev3", 00:15:12.485 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:12.485 "is_configured": true, 00:15:12.485 "data_offset": 0, 00:15:12.485 "data_size": 65536 00:15:12.485 }, 00:15:12.485 { 00:15:12.485 "name": "BaseBdev4", 00:15:12.485 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:12.485 "is_configured": true, 00:15:12.485 "data_offset": 0, 00:15:12.485 "data_size": 65536 00:15:12.485 } 00:15:12.485 ] 00:15:12.485 }' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.485 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.745 "name": "raid_bdev1", 00:15:12.745 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:12.745 "strip_size_kb": 0, 00:15:12.745 "state": "online", 00:15:12.745 "raid_level": "raid1", 00:15:12.745 "superblock": false, 00:15:12.745 "num_base_bdevs": 4, 00:15:12.745 "num_base_bdevs_discovered": 3, 00:15:12.745 "num_base_bdevs_operational": 3, 00:15:12.745 "process": { 00:15:12.745 "type": "rebuild", 00:15:12.745 "target": "spare", 00:15:12.745 "progress": { 
00:15:12.745 "blocks": 26624, 00:15:12.745 "percent": 40 00:15:12.745 } 00:15:12.745 }, 00:15:12.745 "base_bdevs_list": [ 00:15:12.745 { 00:15:12.745 "name": "spare", 00:15:12.745 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:12.745 "is_configured": true, 00:15:12.745 "data_offset": 0, 00:15:12.745 "data_size": 65536 00:15:12.745 }, 00:15:12.745 { 00:15:12.745 "name": null, 00:15:12.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.745 "is_configured": false, 00:15:12.745 "data_offset": 0, 00:15:12.745 "data_size": 65536 00:15:12.745 }, 00:15:12.745 { 00:15:12.745 "name": "BaseBdev3", 00:15:12.745 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:12.745 "is_configured": true, 00:15:12.745 "data_offset": 0, 00:15:12.745 "data_size": 65536 00:15:12.745 }, 00:15:12.745 { 00:15:12.745 "name": "BaseBdev4", 00:15:12.745 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:12.745 "is_configured": true, 00:15:12.745 "data_offset": 0, 00:15:12.745 "data_size": 65536 00:15:12.745 } 00:15:12.745 ] 00:15:12.745 }' 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.745 11:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.684 "name": "raid_bdev1", 00:15:13.684 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:13.684 "strip_size_kb": 0, 00:15:13.684 "state": "online", 00:15:13.684 "raid_level": "raid1", 00:15:13.684 "superblock": false, 00:15:13.684 "num_base_bdevs": 4, 00:15:13.684 "num_base_bdevs_discovered": 3, 00:15:13.684 "num_base_bdevs_operational": 3, 00:15:13.684 "process": { 00:15:13.684 "type": "rebuild", 00:15:13.684 "target": "spare", 00:15:13.684 "progress": { 00:15:13.684 "blocks": 49152, 00:15:13.684 "percent": 75 00:15:13.684 } 00:15:13.684 }, 00:15:13.684 "base_bdevs_list": [ 00:15:13.684 { 00:15:13.684 "name": "spare", 00:15:13.684 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 0, 00:15:13.684 "data_size": 65536 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": null, 00:15:13.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.684 "is_configured": false, 00:15:13.684 "data_offset": 0, 00:15:13.684 "data_size": 65536 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": "BaseBdev3", 00:15:13.684 "uuid": 
"6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 0, 00:15:13.684 "data_size": 65536 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": "BaseBdev4", 00:15:13.684 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 0, 00:15:13.684 "data_size": 65536 00:15:13.684 } 00:15:13.684 ] 00:15:13.684 }' 00:15:13.684 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.944 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.944 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.944 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.944 11:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.514 [2024-11-05 11:31:13.628281] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:14.514 [2024-11-05 11:31:13.628460] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:14.514 [2024-11-05 11:31:13.628513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.774 11:31:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.774 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.035 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.035 "name": "raid_bdev1", 00:15:15.035 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:15.035 "strip_size_kb": 0, 00:15:15.035 "state": "online", 00:15:15.035 "raid_level": "raid1", 00:15:15.035 "superblock": false, 00:15:15.035 "num_base_bdevs": 4, 00:15:15.035 "num_base_bdevs_discovered": 3, 00:15:15.035 "num_base_bdevs_operational": 3, 00:15:15.035 "base_bdevs_list": [ 00:15:15.035 { 00:15:15.035 "name": "spare", 00:15:15.035 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:15.035 "is_configured": true, 00:15:15.035 "data_offset": 0, 00:15:15.035 "data_size": 65536 00:15:15.035 }, 00:15:15.035 { 00:15:15.035 "name": null, 00:15:15.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.035 "is_configured": false, 00:15:15.035 "data_offset": 0, 00:15:15.035 "data_size": 65536 00:15:15.035 }, 00:15:15.035 { 00:15:15.035 "name": "BaseBdev3", 00:15:15.035 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:15.035 "is_configured": true, 00:15:15.035 "data_offset": 0, 00:15:15.035 "data_size": 65536 00:15:15.035 }, 00:15:15.035 { 00:15:15.035 "name": "BaseBdev4", 00:15:15.035 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:15.035 "is_configured": true, 00:15:15.035 "data_offset": 0, 00:15:15.035 "data_size": 65536 00:15:15.035 } 00:15:15.035 ] 00:15:15.036 }' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.036 "name": "raid_bdev1", 00:15:15.036 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:15.036 "strip_size_kb": 0, 00:15:15.036 "state": "online", 00:15:15.036 "raid_level": "raid1", 00:15:15.036 "superblock": false, 00:15:15.036 "num_base_bdevs": 4, 00:15:15.036 "num_base_bdevs_discovered": 3, 00:15:15.036 "num_base_bdevs_operational": 3, 00:15:15.036 
"base_bdevs_list": [ 00:15:15.036 { 00:15:15.036 "name": "spare", 00:15:15.036 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:15.036 "is_configured": true, 00:15:15.036 "data_offset": 0, 00:15:15.036 "data_size": 65536 00:15:15.036 }, 00:15:15.036 { 00:15:15.036 "name": null, 00:15:15.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.036 "is_configured": false, 00:15:15.036 "data_offset": 0, 00:15:15.036 "data_size": 65536 00:15:15.036 }, 00:15:15.036 { 00:15:15.036 "name": "BaseBdev3", 00:15:15.036 "uuid": "6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:15.036 "is_configured": true, 00:15:15.036 "data_offset": 0, 00:15:15.036 "data_size": 65536 00:15:15.036 }, 00:15:15.036 { 00:15:15.036 "name": "BaseBdev4", 00:15:15.036 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:15.036 "is_configured": true, 00:15:15.036 "data_offset": 0, 00:15:15.036 "data_size": 65536 00:15:15.036 } 00:15:15.036 ] 00:15:15.036 }' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.036 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.296 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.296 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.296 "name": "raid_bdev1", 00:15:15.296 "uuid": "2d7c6577-5a34-45ce-8e11-3145bd9a8416", 00:15:15.296 "strip_size_kb": 0, 00:15:15.296 "state": "online", 00:15:15.296 "raid_level": "raid1", 00:15:15.296 "superblock": false, 00:15:15.296 "num_base_bdevs": 4, 00:15:15.296 "num_base_bdevs_discovered": 3, 00:15:15.296 "num_base_bdevs_operational": 3, 00:15:15.296 "base_bdevs_list": [ 00:15:15.296 { 00:15:15.296 "name": "spare", 00:15:15.296 "uuid": "e308eba3-20fa-59aa-b875-e89b0b68cee0", 00:15:15.296 "is_configured": true, 00:15:15.296 "data_offset": 0, 00:15:15.296 "data_size": 65536 00:15:15.296 }, 00:15:15.296 { 00:15:15.296 "name": null, 00:15:15.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.296 "is_configured": false, 00:15:15.296 "data_offset": 0, 00:15:15.296 "data_size": 65536 00:15:15.296 }, 00:15:15.296 { 00:15:15.296 "name": "BaseBdev3", 00:15:15.296 "uuid": 
"6b1ae928-dd47-5c44-a95a-80e4151ffca6", 00:15:15.296 "is_configured": true, 00:15:15.296 "data_offset": 0, 00:15:15.296 "data_size": 65536 00:15:15.296 }, 00:15:15.296 { 00:15:15.296 "name": "BaseBdev4", 00:15:15.296 "uuid": "e2a5ab13-c161-5406-aa94-10c5f77feb38", 00:15:15.296 "is_configured": true, 00:15:15.296 "data_offset": 0, 00:15:15.296 "data_size": 65536 00:15:15.296 } 00:15:15.296 ] 00:15:15.296 }' 00:15:15.296 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.296 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.556 [2024-11-05 11:31:14.675870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.556 [2024-11-05 11:31:14.676013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.556 [2024-11-05 11:31:14.676137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.556 [2024-11-05 11:31:14.676231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.556 [2024-11-05 11:31:14.676243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:15.556 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:15.557 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:15.557 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.557 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:15.816 /dev/nbd0 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:15.816 11:31:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.816 1+0 records in 00:15:15.816 1+0 records out 00:15:15.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021505 s, 19.0 MB/s 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:15.816 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.817 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:15.817 11:31:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:16.076 /dev/nbd1 00:15:16.076 
11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.076 1+0 records in 00:15:16.076 1+0 records out 00:15:16.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361317 s, 11.3 MB/s 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.076 11:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.336 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:16.596 11:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77615 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77615 ']' 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77615 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77615 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77615' 00:15:16.597 killing process with pid 77615 00:15:16.597 
11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77615 00:15:16.597 Received shutdown signal, test time was about 60.000000 seconds 00:15:16.597 00:15:16.597 Latency(us) 00:15:16.597 [2024-11-05T11:31:15.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.597 [2024-11-05T11:31:15.871Z] =================================================================================================================== 00:15:16.597 [2024-11-05T11:31:15.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.597 11:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77615 00:15:16.597 [2024-11-05 11:31:15.846714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.166 [2024-11-05 11:31:16.320683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.116 11:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:18.116 00:15:18.116 real 0m17.375s 00:15:18.116 user 0m18.799s 00:15:18.116 sys 0m3.335s 00:15:18.116 ************************************ 00:15:18.116 END TEST raid_rebuild_test 00:15:18.116 ************************************ 00:15:18.116 11:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.116 11:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.376 11:31:17 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:18.376 11:31:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:18.376 11:31:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:18.376 11:31:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.376 ************************************ 00:15:18.376 START TEST raid_rebuild_test_sb 00:15:18.376 ************************************ 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78061 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78061 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78061 ']' 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:18.376 11:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.376 [2024-11-05 11:31:17.548797] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:15:18.376 [2024-11-05 11:31:17.549049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78061 ] 00:15:18.376 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:18.376 Zero copy mechanism will not be used. 00:15:18.636 [2024-11-05 11:31:17.719026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.636 [2024-11-05 11:31:17.826763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.896 [2024-11-05 11:31:18.025842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.896 [2024-11-05 11:31:18.025960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.156 BaseBdev1_malloc 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.156 [2024-11-05 11:31:18.409859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.156 [2024-11-05 11:31:18.409925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.156 [2024-11-05 11:31:18.409948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:19.156 [2024-11-05 11:31:18.409958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.156 [2024-11-05 11:31:18.412006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.156 [2024-11-05 11:31:18.412049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.156 BaseBdev1 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.156 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 BaseBdev2_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 [2024-11-05 11:31:18.464273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:19.417 [2024-11-05 11:31:18.464331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.417 [2024-11-05 11:31:18.464348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.417 [2024-11-05 11:31:18.464359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.417 [2024-11-05 11:31:18.466385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.417 [2024-11-05 11:31:18.466491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.417 BaseBdev2 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 BaseBdev3_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 [2024-11-05 11:31:18.532541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:19.417 [2024-11-05 11:31:18.532634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.417 [2024-11-05 11:31:18.532687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.417 [2024-11-05 11:31:18.532716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.417 [2024-11-05 11:31:18.534694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.417 [2024-11-05 11:31:18.534765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.417 BaseBdev3 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 BaseBdev4_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:19.417 [2024-11-05 11:31:18.587478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:19.417 [2024-11-05 11:31:18.587536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.417 [2024-11-05 11:31:18.587554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:19.417 [2024-11-05 11:31:18.587564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.417 [2024-11-05 11:31:18.589603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.417 [2024-11-05 11:31:18.589646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:19.417 BaseBdev4 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 spare_malloc 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 spare_delay 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:19.417 11:31:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 [2024-11-05 11:31:18.653722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.417 [2024-11-05 11:31:18.653776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.417 [2024-11-05 11:31:18.653794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:19.417 [2024-11-05 11:31:18.653804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.417 [2024-11-05 11:31:18.655844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.417 [2024-11-05 11:31:18.655885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.417 spare 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 [2024-11-05 11:31:18.665759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.417 [2024-11-05 11:31:18.667587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.417 [2024-11-05 11:31:18.667653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.417 [2024-11-05 11:31:18.667703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.417 [2024-11-05 11:31:18.667869] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:19.417 [2024-11-05 11:31:18.667886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.417 [2024-11-05 11:31:18.668112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:19.417 [2024-11-05 11:31:18.668306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:19.417 [2024-11-05 11:31:18.668317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:19.417 [2024-11-05 11:31:18.668478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.417 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.677 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.677 11:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.677 "name": "raid_bdev1", 00:15:19.677 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:19.677 "strip_size_kb": 0, 00:15:19.677 "state": "online", 00:15:19.677 "raid_level": "raid1", 00:15:19.677 "superblock": true, 00:15:19.677 "num_base_bdevs": 4, 00:15:19.677 "num_base_bdevs_discovered": 4, 00:15:19.677 "num_base_bdevs_operational": 4, 00:15:19.677 "base_bdevs_list": [ 00:15:19.677 { 00:15:19.677 "name": "BaseBdev1", 00:15:19.677 "uuid": "d1d6508d-3fed-5a15-8c63-fe952c8ee6f7", 00:15:19.677 "is_configured": true, 00:15:19.677 "data_offset": 2048, 00:15:19.677 "data_size": 63488 00:15:19.677 }, 00:15:19.677 { 00:15:19.677 "name": "BaseBdev2", 00:15:19.677 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:19.677 "is_configured": true, 00:15:19.677 "data_offset": 2048, 00:15:19.677 "data_size": 63488 00:15:19.677 }, 00:15:19.677 { 00:15:19.677 "name": "BaseBdev3", 00:15:19.677 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:19.677 "is_configured": true, 00:15:19.677 "data_offset": 2048, 00:15:19.677 "data_size": 63488 00:15:19.677 }, 00:15:19.677 { 00:15:19.677 "name": "BaseBdev4", 00:15:19.677 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:19.677 "is_configured": true, 00:15:19.677 "data_offset": 2048, 00:15:19.677 "data_size": 63488 00:15:19.677 } 00:15:19.677 ] 00:15:19.677 }' 00:15:19.677 11:31:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.677 11:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.937 [2024-11-05 11:31:19.125312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.937 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:20.197 [2024-11-05 11:31:19.396593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:20.197 /dev/nbd0 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:20.197 
11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.197 1+0 records in 00:15:20.197 1+0 records out 00:15:20.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359297 s, 11.4 MB/s 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:20.197 11:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:25.570 63488+0 records in 00:15:25.570 63488+0 records out 00:15:25.570 32505856 bytes (33 MB, 31 MiB) copied, 4.84459 s, 6.7 MB/s 00:15:25.570 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:25.570 11:31:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.570 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.570 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.570 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.571 [2024-11-05 11:31:24.480648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 [2024-11-05 11:31:24.517858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.571 
11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.571 "name": "raid_bdev1", 00:15:25.571 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:25.571 "strip_size_kb": 0, 00:15:25.571 "state": 
"online", 00:15:25.571 "raid_level": "raid1", 00:15:25.571 "superblock": true, 00:15:25.571 "num_base_bdevs": 4, 00:15:25.571 "num_base_bdevs_discovered": 3, 00:15:25.571 "num_base_bdevs_operational": 3, 00:15:25.571 "base_bdevs_list": [ 00:15:25.571 { 00:15:25.571 "name": null, 00:15:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.571 "is_configured": false, 00:15:25.571 "data_offset": 0, 00:15:25.571 "data_size": 63488 00:15:25.571 }, 00:15:25.571 { 00:15:25.571 "name": "BaseBdev2", 00:15:25.571 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:25.571 "is_configured": true, 00:15:25.571 "data_offset": 2048, 00:15:25.571 "data_size": 63488 00:15:25.571 }, 00:15:25.571 { 00:15:25.571 "name": "BaseBdev3", 00:15:25.571 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:25.571 "is_configured": true, 00:15:25.571 "data_offset": 2048, 00:15:25.571 "data_size": 63488 00:15:25.571 }, 00:15:25.571 { 00:15:25.571 "name": "BaseBdev4", 00:15:25.571 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:25.571 "is_configured": true, 00:15:25.571 "data_offset": 2048, 00:15:25.571 "data_size": 63488 00:15:25.571 } 00:15:25.571 ] 00:15:25.571 }' 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.571 11:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.831 11:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:25.831 11:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.831 11:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.831 [2024-11-05 11:31:25.009027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.831 [2024-11-05 11:31:25.024833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:25.831 11:31:25 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.831 11:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:25.831 [2024-11-05 11:31:25.026679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.770 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.030 "name": "raid_bdev1", 00:15:27.030 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:27.030 "strip_size_kb": 0, 00:15:27.030 "state": "online", 00:15:27.030 "raid_level": "raid1", 00:15:27.030 "superblock": true, 00:15:27.030 "num_base_bdevs": 4, 00:15:27.030 "num_base_bdevs_discovered": 4, 00:15:27.030 "num_base_bdevs_operational": 4, 00:15:27.030 "process": { 00:15:27.030 "type": "rebuild", 00:15:27.030 "target": "spare", 00:15:27.030 "progress": { 00:15:27.030 "blocks": 20480, 
00:15:27.030 "percent": 32 00:15:27.030 } 00:15:27.030 }, 00:15:27.030 "base_bdevs_list": [ 00:15:27.030 { 00:15:27.030 "name": "spare", 00:15:27.030 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:27.030 "is_configured": true, 00:15:27.030 "data_offset": 2048, 00:15:27.030 "data_size": 63488 00:15:27.030 }, 00:15:27.030 { 00:15:27.030 "name": "BaseBdev2", 00:15:27.030 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:27.030 "is_configured": true, 00:15:27.030 "data_offset": 2048, 00:15:27.030 "data_size": 63488 00:15:27.030 }, 00:15:27.030 { 00:15:27.030 "name": "BaseBdev3", 00:15:27.030 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:27.030 "is_configured": true, 00:15:27.030 "data_offset": 2048, 00:15:27.030 "data_size": 63488 00:15:27.030 }, 00:15:27.030 { 00:15:27.030 "name": "BaseBdev4", 00:15:27.030 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:27.030 "is_configured": true, 00:15:27.030 "data_offset": 2048, 00:15:27.030 "data_size": 63488 00:15:27.030 } 00:15:27.030 ] 00:15:27.030 }' 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.030 [2024-11-05 11:31:26.189803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.030 [2024-11-05 11:31:26.231268] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.030 [2024-11-05 11:31:26.231340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.030 [2024-11-05 11:31:26.231356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.030 [2024-11-05 11:31:26.231365] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.030 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:27.031 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.031 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.290 "name": "raid_bdev1", 00:15:27.290 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:27.290 "strip_size_kb": 0, 00:15:27.290 "state": "online", 00:15:27.290 "raid_level": "raid1", 00:15:27.290 "superblock": true, 00:15:27.290 "num_base_bdevs": 4, 00:15:27.290 "num_base_bdevs_discovered": 3, 00:15:27.290 "num_base_bdevs_operational": 3, 00:15:27.290 "base_bdevs_list": [ 00:15:27.290 { 00:15:27.290 "name": null, 00:15:27.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.290 "is_configured": false, 00:15:27.290 "data_offset": 0, 00:15:27.290 "data_size": 63488 00:15:27.290 }, 00:15:27.290 { 00:15:27.290 "name": "BaseBdev2", 00:15:27.290 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:27.290 "is_configured": true, 00:15:27.290 "data_offset": 2048, 00:15:27.290 "data_size": 63488 00:15:27.290 }, 00:15:27.290 { 00:15:27.290 "name": "BaseBdev3", 00:15:27.290 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:27.290 "is_configured": true, 00:15:27.290 "data_offset": 2048, 00:15:27.290 "data_size": 63488 00:15:27.290 }, 00:15:27.290 { 00:15:27.290 "name": "BaseBdev4", 00:15:27.291 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:27.291 "is_configured": true, 00:15:27.291 "data_offset": 2048, 00:15:27.291 "data_size": 63488 00:15:27.291 } 00:15:27.291 ] 00:15:27.291 }' 00:15:27.291 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.291 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.550 
11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.550 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.550 "name": "raid_bdev1", 00:15:27.551 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:27.551 "strip_size_kb": 0, 00:15:27.551 "state": "online", 00:15:27.551 "raid_level": "raid1", 00:15:27.551 "superblock": true, 00:15:27.551 "num_base_bdevs": 4, 00:15:27.551 "num_base_bdevs_discovered": 3, 00:15:27.551 "num_base_bdevs_operational": 3, 00:15:27.551 "base_bdevs_list": [ 00:15:27.551 { 00:15:27.551 "name": null, 00:15:27.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.551 "is_configured": false, 00:15:27.551 "data_offset": 0, 00:15:27.551 "data_size": 63488 00:15:27.551 }, 00:15:27.551 { 00:15:27.551 "name": "BaseBdev2", 00:15:27.551 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:27.551 "is_configured": true, 00:15:27.551 "data_offset": 2048, 00:15:27.551 "data_size": 63488 00:15:27.551 }, 00:15:27.551 { 00:15:27.551 "name": "BaseBdev3", 00:15:27.551 "uuid": 
"7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:27.551 "is_configured": true, 00:15:27.551 "data_offset": 2048, 00:15:27.551 "data_size": 63488 00:15:27.551 }, 00:15:27.551 { 00:15:27.551 "name": "BaseBdev4", 00:15:27.551 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:27.551 "is_configured": true, 00:15:27.551 "data_offset": 2048, 00:15:27.551 "data_size": 63488 00:15:27.551 } 00:15:27.551 ] 00:15:27.551 }' 00:15:27.551 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.551 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.551 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.810 [2024-11-05 11:31:26.847217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.810 [2024-11-05 11:31:26.861591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.810 11:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:27.810 [2024-11-05 11:31:26.863405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.750 "name": "raid_bdev1", 00:15:28.750 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:28.750 "strip_size_kb": 0, 00:15:28.750 "state": "online", 00:15:28.750 "raid_level": "raid1", 00:15:28.750 "superblock": true, 00:15:28.750 "num_base_bdevs": 4, 00:15:28.750 "num_base_bdevs_discovered": 4, 00:15:28.750 "num_base_bdevs_operational": 4, 00:15:28.750 "process": { 00:15:28.750 "type": "rebuild", 00:15:28.750 "target": "spare", 00:15:28.750 "progress": { 00:15:28.750 "blocks": 20480, 00:15:28.750 "percent": 32 00:15:28.750 } 00:15:28.750 }, 00:15:28.750 "base_bdevs_list": [ 00:15:28.750 { 00:15:28.750 "name": "spare", 00:15:28.750 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:28.750 "is_configured": true, 00:15:28.750 "data_offset": 2048, 00:15:28.750 "data_size": 63488 00:15:28.750 }, 00:15:28.750 { 00:15:28.750 "name": "BaseBdev2", 00:15:28.750 "uuid": "c10b7b48-edb3-52d2-8617-07843178c8dc", 00:15:28.750 "is_configured": true, 00:15:28.750 "data_offset": 2048, 
00:15:28.750 "data_size": 63488 00:15:28.750 }, 00:15:28.750 { 00:15:28.750 "name": "BaseBdev3", 00:15:28.750 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:28.750 "is_configured": true, 00:15:28.750 "data_offset": 2048, 00:15:28.750 "data_size": 63488 00:15:28.750 }, 00:15:28.750 { 00:15:28.750 "name": "BaseBdev4", 00:15:28.750 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:28.750 "is_configured": true, 00:15:28.750 "data_offset": 2048, 00:15:28.750 "data_size": 63488 00:15:28.750 } 00:15:28.750 ] 00:15:28.750 }' 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.750 11:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:28.750 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.750 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.750 [2024-11-05 11:31:28.023159] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.011 [2024-11-05 11:31:28.167850] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.011 "name": "raid_bdev1", 00:15:29.011 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:29.011 "strip_size_kb": 0, 00:15:29.011 "state": "online", 00:15:29.011 "raid_level": "raid1", 00:15:29.011 "superblock": true, 00:15:29.011 "num_base_bdevs": 4, 
00:15:29.011 "num_base_bdevs_discovered": 3, 00:15:29.011 "num_base_bdevs_operational": 3, 00:15:29.011 "process": { 00:15:29.011 "type": "rebuild", 00:15:29.011 "target": "spare", 00:15:29.011 "progress": { 00:15:29.011 "blocks": 24576, 00:15:29.011 "percent": 38 00:15:29.011 } 00:15:29.011 }, 00:15:29.011 "base_bdevs_list": [ 00:15:29.011 { 00:15:29.011 "name": "spare", 00:15:29.011 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:29.011 "is_configured": true, 00:15:29.011 "data_offset": 2048, 00:15:29.011 "data_size": 63488 00:15:29.011 }, 00:15:29.011 { 00:15:29.011 "name": null, 00:15:29.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.011 "is_configured": false, 00:15:29.011 "data_offset": 0, 00:15:29.011 "data_size": 63488 00:15:29.011 }, 00:15:29.011 { 00:15:29.011 "name": "BaseBdev3", 00:15:29.011 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:29.011 "is_configured": true, 00:15:29.011 "data_offset": 2048, 00:15:29.011 "data_size": 63488 00:15:29.011 }, 00:15:29.011 { 00:15:29.011 "name": "BaseBdev4", 00:15:29.011 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:29.011 "is_configured": true, 00:15:29.011 "data_offset": 2048, 00:15:29.011 "data_size": 63488 00:15:29.011 } 00:15:29.011 ] 00:15:29.011 }' 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.011 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=458 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.271 "name": "raid_bdev1", 00:15:29.271 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:29.271 "strip_size_kb": 0, 00:15:29.271 "state": "online", 00:15:29.271 "raid_level": "raid1", 00:15:29.271 "superblock": true, 00:15:29.271 "num_base_bdevs": 4, 00:15:29.271 "num_base_bdevs_discovered": 3, 00:15:29.271 "num_base_bdevs_operational": 3, 00:15:29.271 "process": { 00:15:29.271 "type": "rebuild", 00:15:29.271 "target": "spare", 00:15:29.271 "progress": { 00:15:29.271 "blocks": 26624, 00:15:29.271 "percent": 41 00:15:29.271 } 00:15:29.271 }, 00:15:29.271 "base_bdevs_list": [ 00:15:29.271 { 00:15:29.271 "name": "spare", 00:15:29.271 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:29.271 "is_configured": true, 00:15:29.271 "data_offset": 2048, 00:15:29.271 "data_size": 63488 00:15:29.271 }, 00:15:29.271 { 
00:15:29.271 "name": null, 00:15:29.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.271 "is_configured": false, 00:15:29.271 "data_offset": 0, 00:15:29.271 "data_size": 63488 00:15:29.271 }, 00:15:29.271 { 00:15:29.271 "name": "BaseBdev3", 00:15:29.271 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:29.271 "is_configured": true, 00:15:29.271 "data_offset": 2048, 00:15:29.271 "data_size": 63488 00:15:29.271 }, 00:15:29.271 { 00:15:29.271 "name": "BaseBdev4", 00:15:29.271 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:29.271 "is_configured": true, 00:15:29.271 "data_offset": 2048, 00:15:29.271 "data_size": 63488 00:15:29.271 } 00:15:29.271 ] 00:15:29.271 }' 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.271 11:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.210 11:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.470 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.470 "name": "raid_bdev1", 00:15:30.470 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:30.470 "strip_size_kb": 0, 00:15:30.470 "state": "online", 00:15:30.470 "raid_level": "raid1", 00:15:30.470 "superblock": true, 00:15:30.470 "num_base_bdevs": 4, 00:15:30.470 "num_base_bdevs_discovered": 3, 00:15:30.470 "num_base_bdevs_operational": 3, 00:15:30.470 "process": { 00:15:30.470 "type": "rebuild", 00:15:30.470 "target": "spare", 00:15:30.470 "progress": { 00:15:30.470 "blocks": 49152, 00:15:30.470 "percent": 77 00:15:30.470 } 00:15:30.470 }, 00:15:30.470 "base_bdevs_list": [ 00:15:30.470 { 00:15:30.470 "name": "spare", 00:15:30.470 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:30.470 "is_configured": true, 00:15:30.470 "data_offset": 2048, 00:15:30.470 "data_size": 63488 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "name": null, 00:15:30.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.470 "is_configured": false, 00:15:30.470 "data_offset": 0, 00:15:30.470 "data_size": 63488 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "name": "BaseBdev3", 00:15:30.470 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:30.470 "is_configured": true, 00:15:30.470 "data_offset": 2048, 00:15:30.470 "data_size": 63488 00:15:30.470 }, 00:15:30.470 { 00:15:30.470 "name": "BaseBdev4", 00:15:30.470 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:30.470 "is_configured": true, 00:15:30.470 "data_offset": 
2048, 00:15:30.470 "data_size": 63488 00:15:30.470 } 00:15:30.470 ] 00:15:30.470 }' 00:15:30.470 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.470 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.470 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.470 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.471 11:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.040 [2024-11-05 11:31:30.074970] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:31.040 [2024-11-05 11:31:30.075034] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:31.040 [2024-11-05 11:31:30.075173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.610 "name": "raid_bdev1", 00:15:31.610 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:31.610 "strip_size_kb": 0, 00:15:31.610 "state": "online", 00:15:31.610 "raid_level": "raid1", 00:15:31.610 "superblock": true, 00:15:31.610 "num_base_bdevs": 4, 00:15:31.610 "num_base_bdevs_discovered": 3, 00:15:31.610 "num_base_bdevs_operational": 3, 00:15:31.610 "base_bdevs_list": [ 00:15:31.610 { 00:15:31.610 "name": "spare", 00:15:31.610 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:31.610 "is_configured": true, 00:15:31.610 "data_offset": 2048, 00:15:31.610 "data_size": 63488 00:15:31.610 }, 00:15:31.610 { 00:15:31.610 "name": null, 00:15:31.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.610 "is_configured": false, 00:15:31.610 "data_offset": 0, 00:15:31.610 "data_size": 63488 00:15:31.610 }, 00:15:31.610 { 00:15:31.610 "name": "BaseBdev3", 00:15:31.610 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:31.610 "is_configured": true, 00:15:31.610 "data_offset": 2048, 00:15:31.610 "data_size": 63488 00:15:31.610 }, 00:15:31.610 { 00:15:31.610 "name": "BaseBdev4", 00:15:31.610 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:31.610 "is_configured": true, 00:15:31.610 "data_offset": 2048, 00:15:31.610 "data_size": 63488 00:15:31.610 } 00:15:31.610 ] 00:15:31.610 }' 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.610 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.610 "name": "raid_bdev1", 00:15:31.610 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:31.610 "strip_size_kb": 0, 00:15:31.610 "state": "online", 00:15:31.610 "raid_level": "raid1", 00:15:31.610 "superblock": true, 00:15:31.610 "num_base_bdevs": 4, 00:15:31.610 "num_base_bdevs_discovered": 3, 00:15:31.610 "num_base_bdevs_operational": 3, 00:15:31.610 "base_bdevs_list": [ 00:15:31.610 { 00:15:31.610 "name": "spare", 00:15:31.610 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:31.610 "is_configured": true, 00:15:31.610 "data_offset": 2048, 
00:15:31.610 "data_size": 63488 00:15:31.610 }, 00:15:31.611 { 00:15:31.611 "name": null, 00:15:31.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.611 "is_configured": false, 00:15:31.611 "data_offset": 0, 00:15:31.611 "data_size": 63488 00:15:31.611 }, 00:15:31.611 { 00:15:31.611 "name": "BaseBdev3", 00:15:31.611 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:31.611 "is_configured": true, 00:15:31.611 "data_offset": 2048, 00:15:31.611 "data_size": 63488 00:15:31.611 }, 00:15:31.611 { 00:15:31.611 "name": "BaseBdev4", 00:15:31.611 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:31.611 "is_configured": true, 00:15:31.611 "data_offset": 2048, 00:15:31.611 "data_size": 63488 00:15:31.611 } 00:15:31.611 ] 00:15:31.611 }' 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.611 
11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.611 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.871 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.871 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.871 "name": "raid_bdev1", 00:15:31.871 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:31.871 "strip_size_kb": 0, 00:15:31.871 "state": "online", 00:15:31.871 "raid_level": "raid1", 00:15:31.871 "superblock": true, 00:15:31.871 "num_base_bdevs": 4, 00:15:31.871 "num_base_bdevs_discovered": 3, 00:15:31.871 "num_base_bdevs_operational": 3, 00:15:31.871 "base_bdevs_list": [ 00:15:31.871 { 00:15:31.871 "name": "spare", 00:15:31.871 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:31.871 "is_configured": true, 00:15:31.871 "data_offset": 2048, 00:15:31.871 "data_size": 63488 00:15:31.871 }, 00:15:31.871 { 00:15:31.871 "name": null, 00:15:31.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.871 "is_configured": false, 00:15:31.871 "data_offset": 0, 00:15:31.871 "data_size": 63488 00:15:31.871 }, 00:15:31.871 { 00:15:31.871 "name": "BaseBdev3", 00:15:31.871 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:31.871 "is_configured": true, 00:15:31.871 "data_offset": 2048, 00:15:31.871 "data_size": 63488 
00:15:31.871 }, 00:15:31.871 { 00:15:31.871 "name": "BaseBdev4", 00:15:31.871 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:31.871 "is_configured": true, 00:15:31.871 "data_offset": 2048, 00:15:31.871 "data_size": 63488 00:15:31.871 } 00:15:31.871 ] 00:15:31.871 }' 00:15:31.871 11:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.871 11:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.131 [2024-11-05 11:31:31.281346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.131 [2024-11-05 11:31:31.281417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.131 [2024-11-05 11:31:31.281530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.131 [2024-11-05 11:31:31.281620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.131 [2024-11-05 11:31:31.281662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.131 
11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.131 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:32.391 /dev/nbd0 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.391 1+0 records in 00:15:32.391 1+0 records out 00:15:32.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465647 s, 8.8 MB/s 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.391 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:32.651 /dev/nbd1 00:15:32.651 11:31:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.651 1+0 records in 00:15:32.651 1+0 records out 00:15:32.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041609 s, 9.8 MB/s 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.651 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.912 11:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.912 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.172 [2024-11-05 11:31:32.416282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.172 [2024-11-05 
11:31:32.416348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.172 [2024-11-05 11:31:32.416369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:33.172 [2024-11-05 11:31:32.416378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.172 [2024-11-05 11:31:32.418508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.172 [2024-11-05 11:31:32.418545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.172 [2024-11-05 11:31:32.418635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.172 [2024-11-05 11:31:32.418682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.172 [2024-11-05 11:31:32.418824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.172 [2024-11-05 11:31:32.418921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:33.172 spare 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.172 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.431 [2024-11-05 11:31:32.518814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:33.431 [2024-11-05 11:31:32.518852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:33.431 [2024-11-05 11:31:32.519162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:33.431 [2024-11-05 11:31:32.519433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007b00 00:15:33.431 [2024-11-05 11:31:32.519453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:33.431 [2024-11-05 11:31:32.519619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.431 11:31:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.431 "name": "raid_bdev1", 00:15:33.431 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:33.431 "strip_size_kb": 0, 00:15:33.431 "state": "online", 00:15:33.431 "raid_level": "raid1", 00:15:33.431 "superblock": true, 00:15:33.431 "num_base_bdevs": 4, 00:15:33.431 "num_base_bdevs_discovered": 3, 00:15:33.431 "num_base_bdevs_operational": 3, 00:15:33.431 "base_bdevs_list": [ 00:15:33.431 { 00:15:33.431 "name": "spare", 00:15:33.431 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:33.431 "is_configured": true, 00:15:33.431 "data_offset": 2048, 00:15:33.431 "data_size": 63488 00:15:33.431 }, 00:15:33.431 { 00:15:33.431 "name": null, 00:15:33.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.431 "is_configured": false, 00:15:33.431 "data_offset": 2048, 00:15:33.431 "data_size": 63488 00:15:33.431 }, 00:15:33.431 { 00:15:33.431 "name": "BaseBdev3", 00:15:33.431 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:33.431 "is_configured": true, 00:15:33.431 "data_offset": 2048, 00:15:33.431 "data_size": 63488 00:15:33.431 }, 00:15:33.431 { 00:15:33.431 "name": "BaseBdev4", 00:15:33.431 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:33.431 "is_configured": true, 00:15:33.431 "data_offset": 2048, 00:15:33.431 "data_size": 63488 00:15:33.431 } 00:15:33.431 ] 00:15:33.431 }' 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.431 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.000 11:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.000 "name": "raid_bdev1", 00:15:34.000 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:34.000 "strip_size_kb": 0, 00:15:34.000 "state": "online", 00:15:34.000 "raid_level": "raid1", 00:15:34.000 "superblock": true, 00:15:34.000 "num_base_bdevs": 4, 00:15:34.000 "num_base_bdevs_discovered": 3, 00:15:34.000 "num_base_bdevs_operational": 3, 00:15:34.000 "base_bdevs_list": [ 00:15:34.000 { 00:15:34.000 "name": "spare", 00:15:34.000 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:34.000 "is_configured": true, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "name": null, 00:15:34.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.000 "is_configured": false, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "name": "BaseBdev3", 00:15:34.000 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:34.000 "is_configured": true, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 
"name": "BaseBdev4", 00:15:34.000 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:34.000 "is_configured": true, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 }' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.000 [2024-11-05 11:31:33.143224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.000 "name": "raid_bdev1", 00:15:34.000 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:34.000 "strip_size_kb": 0, 00:15:34.000 "state": "online", 00:15:34.000 "raid_level": "raid1", 00:15:34.000 "superblock": true, 00:15:34.000 "num_base_bdevs": 4, 00:15:34.000 "num_base_bdevs_discovered": 2, 00:15:34.000 "num_base_bdevs_operational": 2, 00:15:34.000 
"base_bdevs_list": [ 00:15:34.000 { 00:15:34.000 "name": null, 00:15:34.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.000 "is_configured": false, 00:15:34.000 "data_offset": 0, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "name": null, 00:15:34.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.000 "is_configured": false, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "name": "BaseBdev3", 00:15:34.000 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:34.000 "is_configured": true, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 }, 00:15:34.000 { 00:15:34.000 "name": "BaseBdev4", 00:15:34.000 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:34.000 "is_configured": true, 00:15:34.000 "data_offset": 2048, 00:15:34.000 "data_size": 63488 00:15:34.000 } 00:15:34.000 ] 00:15:34.000 }' 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.000 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.581 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.581 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.581 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.581 [2024-11-05 11:31:33.602468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.581 [2024-11-05 11:31:33.602667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:34.581 [2024-11-05 11:31:33.602692] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:34.581 [2024-11-05 11:31:33.602721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.581 [2024-11-05 11:31:33.616891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:34.581 11:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.581 11:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:34.581 [2024-11-05 11:31:33.618691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.534 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.535 "name": "raid_bdev1", 00:15:35.535 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:35.535 "strip_size_kb": 0, 00:15:35.535 "state": "online", 00:15:35.535 "raid_level": "raid1", 
00:15:35.535 "superblock": true, 00:15:35.535 "num_base_bdevs": 4, 00:15:35.535 "num_base_bdevs_discovered": 3, 00:15:35.535 "num_base_bdevs_operational": 3, 00:15:35.535 "process": { 00:15:35.535 "type": "rebuild", 00:15:35.535 "target": "spare", 00:15:35.535 "progress": { 00:15:35.535 "blocks": 20480, 00:15:35.535 "percent": 32 00:15:35.535 } 00:15:35.535 }, 00:15:35.535 "base_bdevs_list": [ 00:15:35.535 { 00:15:35.535 "name": "spare", 00:15:35.535 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:35.535 "is_configured": true, 00:15:35.535 "data_offset": 2048, 00:15:35.535 "data_size": 63488 00:15:35.535 }, 00:15:35.535 { 00:15:35.535 "name": null, 00:15:35.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.535 "is_configured": false, 00:15:35.535 "data_offset": 2048, 00:15:35.535 "data_size": 63488 00:15:35.535 }, 00:15:35.535 { 00:15:35.535 "name": "BaseBdev3", 00:15:35.535 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:35.535 "is_configured": true, 00:15:35.535 "data_offset": 2048, 00:15:35.535 "data_size": 63488 00:15:35.535 }, 00:15:35.535 { 00:15:35.535 "name": "BaseBdev4", 00:15:35.535 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:35.535 "is_configured": true, 00:15:35.535 "data_offset": 2048, 00:15:35.535 "data_size": 63488 00:15:35.535 } 00:15:35.535 ] 00:15:35.535 }' 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:35.535 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.535 [2024-11-05 11:31:34.762974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.794 [2024-11-05 11:31:34.823340] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.794 [2024-11-05 11:31:34.823392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.794 [2024-11-05 11:31:34.823412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.795 [2024-11-05 11:31:34.823418] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.795 "name": "raid_bdev1", 00:15:35.795 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:35.795 "strip_size_kb": 0, 00:15:35.795 "state": "online", 00:15:35.795 "raid_level": "raid1", 00:15:35.795 "superblock": true, 00:15:35.795 "num_base_bdevs": 4, 00:15:35.795 "num_base_bdevs_discovered": 2, 00:15:35.795 "num_base_bdevs_operational": 2, 00:15:35.795 "base_bdevs_list": [ 00:15:35.795 { 00:15:35.795 "name": null, 00:15:35.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.795 "is_configured": false, 00:15:35.795 "data_offset": 0, 00:15:35.795 "data_size": 63488 00:15:35.795 }, 00:15:35.795 { 00:15:35.795 "name": null, 00:15:35.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.795 "is_configured": false, 00:15:35.795 "data_offset": 2048, 00:15:35.795 "data_size": 63488 00:15:35.795 }, 00:15:35.795 { 00:15:35.795 "name": "BaseBdev3", 00:15:35.795 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:35.795 "is_configured": true, 00:15:35.795 "data_offset": 2048, 00:15:35.795 "data_size": 63488 00:15:35.795 }, 00:15:35.795 { 00:15:35.795 "name": "BaseBdev4", 00:15:35.795 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:35.795 "is_configured": true, 00:15:35.795 "data_offset": 2048, 00:15:35.795 "data_size": 63488 00:15:35.795 } 00:15:35.795 ] 00:15:35.795 }' 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:35.795 11:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.055 11:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.055 11:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.055 11:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.055 [2024-11-05 11:31:35.308578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.055 [2024-11-05 11:31:35.308632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.055 [2024-11-05 11:31:35.308659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:36.055 [2024-11-05 11:31:35.308668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.055 [2024-11-05 11:31:35.309117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.055 [2024-11-05 11:31:35.309148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.055 [2024-11-05 11:31:35.309239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.055 [2024-11-05 11:31:35.309250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:36.055 [2024-11-05 11:31:35.309264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:36.055 [2024-11-05 11:31:35.309294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.055 [2024-11-05 11:31:35.323543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:36.055 spare 00:15:36.055 11:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.055 [2024-11-05 11:31:35.325338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.055 11:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.435 "name": "raid_bdev1", 00:15:37.435 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:37.435 "strip_size_kb": 0, 00:15:37.435 "state": "online", 00:15:37.435 
"raid_level": "raid1", 00:15:37.435 "superblock": true, 00:15:37.435 "num_base_bdevs": 4, 00:15:37.435 "num_base_bdevs_discovered": 3, 00:15:37.435 "num_base_bdevs_operational": 3, 00:15:37.435 "process": { 00:15:37.435 "type": "rebuild", 00:15:37.435 "target": "spare", 00:15:37.435 "progress": { 00:15:37.435 "blocks": 20480, 00:15:37.435 "percent": 32 00:15:37.435 } 00:15:37.435 }, 00:15:37.435 "base_bdevs_list": [ 00:15:37.435 { 00:15:37.435 "name": "spare", 00:15:37.435 "uuid": "4153da2a-1f69-5399-bdd8-61d0930755af", 00:15:37.435 "is_configured": true, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": null, 00:15:37.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.435 "is_configured": false, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": "BaseBdev3", 00:15:37.435 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:37.435 "is_configured": true, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": "BaseBdev4", 00:15:37.435 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:37.435 "is_configured": true, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 } 00:15:37.435 ] 00:15:37.435 }' 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.435 [2024-11-05 11:31:36.489070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.435 [2024-11-05 11:31:36.529962] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.435 [2024-11-05 11:31:36.530013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.435 [2024-11-05 11:31:36.530027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.435 [2024-11-05 11:31:36.530035] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.435 
11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.435 "name": "raid_bdev1", 00:15:37.435 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:37.435 "strip_size_kb": 0, 00:15:37.435 "state": "online", 00:15:37.435 "raid_level": "raid1", 00:15:37.435 "superblock": true, 00:15:37.435 "num_base_bdevs": 4, 00:15:37.435 "num_base_bdevs_discovered": 2, 00:15:37.435 "num_base_bdevs_operational": 2, 00:15:37.435 "base_bdevs_list": [ 00:15:37.435 { 00:15:37.435 "name": null, 00:15:37.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.435 "is_configured": false, 00:15:37.435 "data_offset": 0, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": null, 00:15:37.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.435 "is_configured": false, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": "BaseBdev3", 00:15:37.435 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:37.435 "is_configured": true, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 }, 00:15:37.435 { 00:15:37.435 "name": "BaseBdev4", 00:15:37.435 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:37.435 "is_configured": true, 00:15:37.435 "data_offset": 2048, 00:15:37.435 "data_size": 63488 00:15:37.435 } 00:15:37.435 ] 00:15:37.435 }' 00:15:37.435 11:31:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.435 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.005 11:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.005 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.005 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.005 "name": "raid_bdev1", 00:15:38.005 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:38.005 "strip_size_kb": 0, 00:15:38.005 "state": "online", 00:15:38.005 "raid_level": "raid1", 00:15:38.005 "superblock": true, 00:15:38.005 "num_base_bdevs": 4, 00:15:38.005 "num_base_bdevs_discovered": 2, 00:15:38.005 "num_base_bdevs_operational": 2, 00:15:38.005 "base_bdevs_list": [ 00:15:38.005 { 00:15:38.005 "name": null, 00:15:38.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.005 "is_configured": false, 00:15:38.005 "data_offset": 0, 00:15:38.005 "data_size": 63488 00:15:38.005 }, 00:15:38.005 
{ 00:15:38.005 "name": null, 00:15:38.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.005 "is_configured": false, 00:15:38.006 "data_offset": 2048, 00:15:38.006 "data_size": 63488 00:15:38.006 }, 00:15:38.006 { 00:15:38.006 "name": "BaseBdev3", 00:15:38.006 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:38.006 "is_configured": true, 00:15:38.006 "data_offset": 2048, 00:15:38.006 "data_size": 63488 00:15:38.006 }, 00:15:38.006 { 00:15:38.006 "name": "BaseBdev4", 00:15:38.006 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:38.006 "is_configured": true, 00:15:38.006 "data_offset": 2048, 00:15:38.006 "data_size": 63488 00:15:38.006 } 00:15:38.006 ] 00:15:38.006 }' 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.006 [2024-11-05 11:31:37.134502] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.006 [2024-11-05 11:31:37.134567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.006 [2024-11-05 11:31:37.134587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:38.006 [2024-11-05 11:31:37.134597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.006 [2024-11-05 11:31:37.135041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.006 [2024-11-05 11:31:37.135060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.006 [2024-11-05 11:31:37.135158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:38.006 [2024-11-05 11:31:37.135175] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:38.006 [2024-11-05 11:31:37.135184] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:38.006 [2024-11-05 11:31:37.135208] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:38.006 BaseBdev1 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.006 11:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.944 11:31:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.944 "name": "raid_bdev1", 00:15:38.944 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:38.944 "strip_size_kb": 0, 00:15:38.944 "state": "online", 00:15:38.944 "raid_level": "raid1", 00:15:38.944 "superblock": true, 00:15:38.944 "num_base_bdevs": 4, 00:15:38.944 "num_base_bdevs_discovered": 2, 00:15:38.944 "num_base_bdevs_operational": 2, 00:15:38.944 "base_bdevs_list": [ 00:15:38.944 { 00:15:38.944 "name": null, 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.944 "is_configured": false, 00:15:38.944 "data_offset": 0, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": null, 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.944 
"is_configured": false, 00:15:38.944 "data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": "BaseBdev3", 00:15:38.944 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:38.944 "is_configured": true, 00:15:38.944 "data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": "BaseBdev4", 00:15:38.944 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:38.944 "is_configured": true, 00:15:38.944 "data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 } 00:15:38.944 ] 00:15:38.944 }' 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.944 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:39.513 "name": "raid_bdev1", 00:15:39.513 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:39.513 "strip_size_kb": 0, 00:15:39.513 "state": "online", 00:15:39.513 "raid_level": "raid1", 00:15:39.513 "superblock": true, 00:15:39.513 "num_base_bdevs": 4, 00:15:39.513 "num_base_bdevs_discovered": 2, 00:15:39.513 "num_base_bdevs_operational": 2, 00:15:39.513 "base_bdevs_list": [ 00:15:39.513 { 00:15:39.513 "name": null, 00:15:39.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.513 "is_configured": false, 00:15:39.513 "data_offset": 0, 00:15:39.513 "data_size": 63488 00:15:39.513 }, 00:15:39.513 { 00:15:39.513 "name": null, 00:15:39.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.513 "is_configured": false, 00:15:39.513 "data_offset": 2048, 00:15:39.513 "data_size": 63488 00:15:39.513 }, 00:15:39.513 { 00:15:39.513 "name": "BaseBdev3", 00:15:39.513 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:39.513 "is_configured": true, 00:15:39.513 "data_offset": 2048, 00:15:39.513 "data_size": 63488 00:15:39.513 }, 00:15:39.513 { 00:15:39.513 "name": "BaseBdev4", 00:15:39.513 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:39.513 "is_configured": true, 00:15:39.513 "data_offset": 2048, 00:15:39.513 "data_size": 63488 00:15:39.513 } 00:15:39.513 ] 00:15:39.513 }' 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.513 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.514 [2024-11-05 11:31:38.751712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.514 [2024-11-05 11:31:38.751909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:39.514 [2024-11-05 11:31:38.751930] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.514 request: 00:15:39.514 { 00:15:39.514 "base_bdev": "BaseBdev1", 00:15:39.514 "raid_bdev": "raid_bdev1", 00:15:39.514 "method": "bdev_raid_add_base_bdev", 00:15:39.514 "req_id": 1 00:15:39.514 } 00:15:39.514 Got JSON-RPC error response 00:15:39.514 response: 00:15:39.514 { 00:15:39.514 "code": -22, 00:15:39.514 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:39.514 } 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.514 11:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.906 "name": "raid_bdev1", 00:15:40.906 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:40.906 "strip_size_kb": 0, 00:15:40.906 "state": "online", 00:15:40.906 "raid_level": "raid1", 00:15:40.906 "superblock": true, 00:15:40.906 "num_base_bdevs": 4, 00:15:40.906 "num_base_bdevs_discovered": 2, 00:15:40.906 "num_base_bdevs_operational": 2, 00:15:40.906 "base_bdevs_list": [ 00:15:40.906 { 00:15:40.906 "name": null, 00:15:40.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.906 "is_configured": false, 00:15:40.906 "data_offset": 0, 00:15:40.906 "data_size": 63488 00:15:40.906 }, 00:15:40.906 { 00:15:40.906 "name": null, 00:15:40.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.906 "is_configured": false, 00:15:40.906 "data_offset": 2048, 00:15:40.906 "data_size": 63488 00:15:40.906 }, 00:15:40.906 { 00:15:40.906 "name": "BaseBdev3", 00:15:40.906 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:40.906 "is_configured": true, 00:15:40.906 "data_offset": 2048, 00:15:40.906 "data_size": 63488 00:15:40.906 }, 00:15:40.906 { 00:15:40.906 "name": "BaseBdev4", 00:15:40.906 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:40.906 "is_configured": true, 00:15:40.906 "data_offset": 2048, 00:15:40.906 "data_size": 63488 00:15:40.906 } 00:15:40.906 ] 00:15:40.906 }' 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.906 11:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.906 11:31:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.906 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.166 "name": "raid_bdev1", 00:15:41.166 "uuid": "66e8e3c1-c737-4135-8660-0ccfdefea11d", 00:15:41.166 "strip_size_kb": 0, 00:15:41.166 "state": "online", 00:15:41.166 "raid_level": "raid1", 00:15:41.166 "superblock": true, 00:15:41.166 "num_base_bdevs": 4, 00:15:41.166 "num_base_bdevs_discovered": 2, 00:15:41.166 "num_base_bdevs_operational": 2, 00:15:41.166 "base_bdevs_list": [ 00:15:41.166 { 00:15:41.166 "name": null, 00:15:41.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.166 "is_configured": false, 00:15:41.166 "data_offset": 0, 00:15:41.166 "data_size": 63488 00:15:41.166 }, 00:15:41.166 { 00:15:41.166 "name": null, 00:15:41.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.166 "is_configured": false, 00:15:41.166 "data_offset": 2048, 00:15:41.166 "data_size": 63488 00:15:41.166 }, 00:15:41.166 { 00:15:41.166 "name": "BaseBdev3", 00:15:41.166 "uuid": "7a4313bb-13ce-5e22-9007-d897840859ca", 00:15:41.166 "is_configured": true, 00:15:41.166 "data_offset": 2048, 00:15:41.166 "data_size": 63488 00:15:41.166 }, 
00:15:41.166 { 00:15:41.166 "name": "BaseBdev4", 00:15:41.166 "uuid": "adbf239d-6155-5c91-85af-08794b0c889b", 00:15:41.166 "is_configured": true, 00:15:41.166 "data_offset": 2048, 00:15:41.166 "data_size": 63488 00:15:41.166 } 00:15:41.166 ] 00:15:41.166 }' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78061 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78061 ']' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78061 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78061 00:15:41.166 killing process with pid 78061 00:15:41.166 Received shutdown signal, test time was about 60.000000 seconds 00:15:41.166 00:15:41.166 Latency(us) 00:15:41.166 [2024-11-05T11:31:40.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.166 [2024-11-05T11:31:40.440Z] =================================================================================================================== 00:15:41.166 [2024-11-05T11:31:40.440Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78061' 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78061 00:15:41.166 [2024-11-05 11:31:40.319309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.166 [2024-11-05 11:31:40.319424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.166 11:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78061 00:15:41.166 [2024-11-05 11:31:40.319490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.166 [2024-11-05 11:31:40.319499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:41.736 [2024-11-05 11:31:40.786988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:42.676 00:15:42.676 real 0m24.379s 00:15:42.676 user 0m29.486s 00:15:42.676 sys 0m3.705s 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.676 ************************************ 00:15:42.676 END TEST raid_rebuild_test_sb 00:15:42.676 ************************************ 00:15:42.676 11:31:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:42.676 11:31:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:42.676 11:31:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.676 11:31:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:42.676 ************************************ 00:15:42.676 START TEST raid_rebuild_test_io 00:15:42.676 ************************************ 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78811 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78811 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78811 ']' 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.676 11:31:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.936 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.936 Zero copy mechanism will not be used. 00:15:42.936 [2024-11-05 11:31:42.002460] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:15:42.936 [2024-11-05 11:31:42.002588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78811 ] 00:15:42.936 [2024-11-05 11:31:42.167535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.196 [2024-11-05 11:31:42.270555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.196 [2024-11-05 11:31:42.453297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.196 [2024-11-05 11:31:42.453354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 BaseBdev1_malloc 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 [2024-11-05 11:31:42.860907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.765 [2024-11-05 11:31:42.860986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.765 [2024-11-05 11:31:42.861008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:43.765 [2024-11-05 11:31:42.861019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.765 [2024-11-05 11:31:42.863044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.765 [2024-11-05 11:31:42.863104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.765 BaseBdev1 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:43.765 BaseBdev2_malloc 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 [2024-11-05 11:31:42.913494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:43.765 [2024-11-05 11:31:42.913561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.765 [2024-11-05 11:31:42.913579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.765 [2024-11-05 11:31:42.913589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.765 [2024-11-05 11:31:42.915587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.765 [2024-11-05 11:31:42.915626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:43.765 BaseBdev2 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 BaseBdev3_malloc 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 [2024-11-05 11:31:42.999991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:43.765 [2024-11-05 11:31:43.000099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.765 [2024-11-05 11:31:43.000122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:43.765 [2024-11-05 11:31:43.000133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.765 [2024-11-05 11:31:43.002100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.765 [2024-11-05 11:31:43.002150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:43.765 BaseBdev3 00:15:43.765 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.765 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.765 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:43.765 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.765 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 BaseBdev4_malloc 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 [2024-11-05 11:31:43.052212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:44.025 [2024-11-05 11:31:43.052259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.025 [2024-11-05 11:31:43.052278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:44.025 [2024-11-05 11:31:43.052288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.025 [2024-11-05 11:31:43.054328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.025 [2024-11-05 11:31:43.054400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:44.025 BaseBdev4 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 spare_malloc 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 spare_delay 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 [2024-11-05 11:31:43.115919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.025 [2024-11-05 11:31:43.115972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.025 [2024-11-05 11:31:43.115992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:44.025 [2024-11-05 11:31:43.116002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.025 [2024-11-05 11:31:43.117999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.025 [2024-11-05 11:31:43.118038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.025 spare 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 [2024-11-05 11:31:43.127949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.025 [2024-11-05 11:31:43.129673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.025 [2024-11-05 11:31:43.129739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.025 [2024-11-05 11:31:43.129786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:44.025 [2024-11-05 11:31:43.129870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:44.025 [2024-11-05 11:31:43.129882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:44.025 [2024-11-05 11:31:43.130108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:44.025 [2024-11-05 11:31:43.130270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:44.025 [2024-11-05 11:31:43.130282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:44.025 [2024-11-05 11:31:43.130413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.025 "name": "raid_bdev1", 00:15:44.025 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:44.025 "strip_size_kb": 0, 00:15:44.025 "state": "online", 00:15:44.025 "raid_level": "raid1", 00:15:44.025 "superblock": false, 00:15:44.025 "num_base_bdevs": 4, 00:15:44.025 "num_base_bdevs_discovered": 4, 00:15:44.025 "num_base_bdevs_operational": 4, 00:15:44.025 "base_bdevs_list": [ 00:15:44.025 { 00:15:44.025 "name": "BaseBdev1", 00:15:44.025 "uuid": "8337f961-9f95-50bd-8b7c-abeaad80ace8", 00:15:44.025 "is_configured": true, 00:15:44.025 "data_offset": 0, 00:15:44.025 "data_size": 65536 00:15:44.025 }, 00:15:44.025 { 00:15:44.025 "name": "BaseBdev2", 00:15:44.025 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:44.025 "is_configured": true, 00:15:44.025 "data_offset": 0, 00:15:44.025 "data_size": 65536 00:15:44.025 }, 00:15:44.025 { 00:15:44.025 "name": "BaseBdev3", 00:15:44.025 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:44.025 "is_configured": true, 00:15:44.025 "data_offset": 0, 00:15:44.025 "data_size": 65536 00:15:44.025 }, 00:15:44.025 { 00:15:44.025 "name": "BaseBdev4", 00:15:44.025 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:44.025 "is_configured": true, 00:15:44.025 "data_offset": 0, 00:15:44.025 "data_size": 65536 00:15:44.025 } 00:15:44.025 ] 00:15:44.025 }' 00:15:44.025 
11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.025 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 [2024-11-05 11:31:43.631509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:44.595 11:31:43 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 [2024-11-05 11:31:43.714980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.595 "name": "raid_bdev1", 00:15:44.595 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:44.595 "strip_size_kb": 0, 00:15:44.595 "state": "online", 00:15:44.595 "raid_level": "raid1", 00:15:44.595 "superblock": false, 00:15:44.595 "num_base_bdevs": 4, 00:15:44.595 "num_base_bdevs_discovered": 3, 00:15:44.595 "num_base_bdevs_operational": 3, 00:15:44.595 "base_bdevs_list": [ 00:15:44.595 { 00:15:44.595 "name": null, 00:15:44.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.595 "is_configured": false, 00:15:44.595 "data_offset": 0, 00:15:44.595 "data_size": 65536 00:15:44.595 }, 00:15:44.595 { 00:15:44.595 "name": "BaseBdev2", 00:15:44.595 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:44.595 "is_configured": true, 00:15:44.595 "data_offset": 0, 00:15:44.595 "data_size": 65536 00:15:44.595 }, 00:15:44.595 { 00:15:44.595 "name": "BaseBdev3", 00:15:44.595 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:44.595 "is_configured": true, 00:15:44.595 "data_offset": 0, 00:15:44.595 "data_size": 65536 00:15:44.595 }, 00:15:44.595 { 00:15:44.595 "name": "BaseBdev4", 00:15:44.595 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:44.595 "is_configured": true, 00:15:44.595 "data_offset": 0, 00:15:44.595 "data_size": 65536 00:15:44.595 } 00:15:44.595 ] 00:15:44.595 }' 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.595 11:31:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.595 [2024-11-05 11:31:43.802504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:44.595 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:44.595 Zero copy mechanism will not be used. 00:15:44.595 Running I/O for 60 seconds... 
00:15:45.165 11:31:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.165 11:31:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.165 11:31:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.165 [2024-11-05 11:31:44.150579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.165 11:31:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.165 11:31:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:45.165 [2024-11-05 11:31:44.214470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:45.165 [2024-11-05 11:31:44.216371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.165 [2024-11-05 11:31:44.324857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.165 [2024-11-05 11:31:44.325384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:45.425 [2024-11-05 11:31:44.536276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:45.425 [2024-11-05 11:31:44.536696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:45.684 [2024-11-05 11:31:44.769938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:45.684 [2024-11-05 11:31:44.770493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:45.684 165.00 IOPS, 495.00 MiB/s [2024-11-05T11:31:44.958Z] [2024-11-05 11:31:44.907968] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:45.684 [2024-11-05 11:31:44.908675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.944 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.204 "name": "raid_bdev1", 00:15:46.204 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:46.204 "strip_size_kb": 0, 00:15:46.204 "state": "online", 00:15:46.204 "raid_level": "raid1", 00:15:46.204 "superblock": false, 00:15:46.204 "num_base_bdevs": 4, 00:15:46.204 "num_base_bdevs_discovered": 4, 00:15:46.204 "num_base_bdevs_operational": 4, 00:15:46.204 "process": { 00:15:46.204 "type": "rebuild", 00:15:46.204 "target": "spare", 00:15:46.204 "progress": { 00:15:46.204 "blocks": 12288, 
00:15:46.204 "percent": 18 00:15:46.204 } 00:15:46.204 }, 00:15:46.204 "base_bdevs_list": [ 00:15:46.204 { 00:15:46.204 "name": "spare", 00:15:46.204 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 0, 00:15:46.204 "data_size": 65536 00:15:46.204 }, 00:15:46.204 { 00:15:46.204 "name": "BaseBdev2", 00:15:46.204 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 0, 00:15:46.204 "data_size": 65536 00:15:46.204 }, 00:15:46.204 { 00:15:46.204 "name": "BaseBdev3", 00:15:46.204 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 0, 00:15:46.204 "data_size": 65536 00:15:46.204 }, 00:15:46.204 { 00:15:46.204 "name": "BaseBdev4", 00:15:46.204 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:46.204 "is_configured": true, 00:15:46.204 "data_offset": 0, 00:15:46.204 "data_size": 65536 00:15:46.204 } 00:15:46.204 ] 00:15:46.204 }' 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.204 [2024-11-05 11:31:45.339453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.204 [2024-11-05 11:31:45.396438] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.204 [2024-11-05 11:31:45.406753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.204 [2024-11-05 11:31:45.406835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.204 [2024-11-05 11:31:45.406855] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.204 [2024-11-05 11:31:45.439802] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.204 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.464 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.464 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.464 "name": "raid_bdev1", 00:15:46.464 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:46.464 "strip_size_kb": 0, 00:15:46.464 "state": "online", 00:15:46.464 "raid_level": "raid1", 00:15:46.464 "superblock": false, 00:15:46.464 "num_base_bdevs": 4, 00:15:46.464 "num_base_bdevs_discovered": 3, 00:15:46.464 "num_base_bdevs_operational": 3, 00:15:46.464 "base_bdevs_list": [ 00:15:46.464 { 00:15:46.464 "name": null, 00:15:46.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.464 "is_configured": false, 00:15:46.464 "data_offset": 0, 00:15:46.464 "data_size": 65536 00:15:46.464 }, 00:15:46.464 { 00:15:46.464 "name": "BaseBdev2", 00:15:46.464 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:46.464 "is_configured": true, 00:15:46.464 "data_offset": 0, 00:15:46.464 "data_size": 65536 00:15:46.464 }, 00:15:46.464 { 00:15:46.464 "name": "BaseBdev3", 00:15:46.464 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:46.464 "is_configured": true, 00:15:46.464 "data_offset": 0, 00:15:46.464 "data_size": 65536 00:15:46.464 }, 00:15:46.464 { 00:15:46.464 "name": "BaseBdev4", 00:15:46.464 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:46.464 "is_configured": true, 00:15:46.464 "data_offset": 0, 00:15:46.464 "data_size": 65536 00:15:46.464 } 00:15:46.464 ] 00:15:46.464 }' 00:15:46.464 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.464 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.733 156.00 IOPS, 468.00 MiB/s 
[2024-11-05T11:31:46.007Z] 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.733 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.733 "name": "raid_bdev1", 00:15:46.733 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:46.733 "strip_size_kb": 0, 00:15:46.733 "state": "online", 00:15:46.733 "raid_level": "raid1", 00:15:46.733 "superblock": false, 00:15:46.733 "num_base_bdevs": 4, 00:15:46.733 "num_base_bdevs_discovered": 3, 00:15:46.734 "num_base_bdevs_operational": 3, 00:15:46.734 "base_bdevs_list": [ 00:15:46.734 { 00:15:46.734 "name": null, 00:15:46.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.734 "is_configured": false, 00:15:46.734 "data_offset": 0, 00:15:46.734 "data_size": 65536 00:15:46.734 }, 00:15:46.734 { 00:15:46.734 "name": "BaseBdev2", 00:15:46.734 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:46.734 "is_configured": true, 00:15:46.734 
"data_offset": 0, 00:15:46.734 "data_size": 65536 00:15:46.734 }, 00:15:46.734 { 00:15:46.734 "name": "BaseBdev3", 00:15:46.734 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:46.734 "is_configured": true, 00:15:46.734 "data_offset": 0, 00:15:46.734 "data_size": 65536 00:15:46.734 }, 00:15:46.734 { 00:15:46.734 "name": "BaseBdev4", 00:15:46.734 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:46.734 "is_configured": true, 00:15:46.734 "data_offset": 0, 00:15:46.734 "data_size": 65536 00:15:46.734 } 00:15:46.734 ] 00:15:46.734 }' 00:15:46.734 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.734 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.031 11:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.031 [2024-11-05 11:31:46.056690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.031 11:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:47.031 [2024-11-05 11:31:46.129762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:47.031 [2024-11-05 11:31:46.131734] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.031 [2024-11-05 11:31:46.233739] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:47.031 [2024-11-05 11:31:46.234268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:47.291 [2024-11-05 11:31:46.444829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:47.291 [2024-11-05 11:31:46.445563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:47.550 [2024-11-05 11:31:46.793440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:47.810 141.67 IOPS, 425.00 MiB/s [2024-11-05T11:31:47.084Z] [2024-11-05 11:31:46.916554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:47.810 [2024-11-05 11:31:46.917283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.071 "name": "raid_bdev1", 00:15:48.071 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:48.071 "strip_size_kb": 0, 00:15:48.071 "state": "online", 00:15:48.071 "raid_level": "raid1", 00:15:48.071 "superblock": false, 00:15:48.071 "num_base_bdevs": 4, 00:15:48.071 "num_base_bdevs_discovered": 4, 00:15:48.071 "num_base_bdevs_operational": 4, 00:15:48.071 "process": { 00:15:48.071 "type": "rebuild", 00:15:48.071 "target": "spare", 00:15:48.071 "progress": { 00:15:48.071 "blocks": 10240, 00:15:48.071 "percent": 15 00:15:48.071 } 00:15:48.071 }, 00:15:48.071 "base_bdevs_list": [ 00:15:48.071 { 00:15:48.071 "name": "spare", 00:15:48.071 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:48.071 "is_configured": true, 00:15:48.071 "data_offset": 0, 00:15:48.071 "data_size": 65536 00:15:48.071 }, 00:15:48.071 { 00:15:48.071 "name": "BaseBdev2", 00:15:48.071 "uuid": "1592d069-35b7-57a0-ad03-378a52f210a6", 00:15:48.071 "is_configured": true, 00:15:48.071 "data_offset": 0, 00:15:48.071 "data_size": 65536 00:15:48.071 }, 00:15:48.071 { 00:15:48.071 "name": "BaseBdev3", 00:15:48.071 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:48.071 "is_configured": true, 00:15:48.071 "data_offset": 0, 00:15:48.071 "data_size": 65536 00:15:48.071 }, 00:15:48.071 { 00:15:48.071 "name": "BaseBdev4", 00:15:48.071 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:48.071 "is_configured": true, 00:15:48.071 "data_offset": 0, 00:15:48.071 "data_size": 65536 00:15:48.071 } 00:15:48.071 ] 00:15:48.071 }' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.071 [2024-11-05 11:31:47.215302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.071 [2024-11-05 11:31:47.253056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:48.071 [2024-11-05 11:31:47.253707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:48.071 [2024-11-05 11:31:47.259947] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:48.071 [2024-11-05 11:31:47.260012] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:48.071 11:31:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.071 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.072 "name": "raid_bdev1", 00:15:48.072 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:48.072 "strip_size_kb": 0, 00:15:48.072 "state": "online", 00:15:48.072 "raid_level": "raid1", 00:15:48.072 "superblock": false, 00:15:48.072 "num_base_bdevs": 4, 00:15:48.072 "num_base_bdevs_discovered": 3, 00:15:48.072 "num_base_bdevs_operational": 3, 00:15:48.072 "process": { 00:15:48.072 "type": "rebuild", 00:15:48.072 "target": "spare", 00:15:48.072 "progress": { 00:15:48.072 "blocks": 14336, 00:15:48.072 "percent": 21 00:15:48.072 } 00:15:48.072 }, 00:15:48.072 "base_bdevs_list": [ 00:15:48.072 { 00:15:48.072 "name": "spare", 00:15:48.072 "uuid": 
"2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:48.072 "is_configured": true, 00:15:48.072 "data_offset": 0, 00:15:48.072 "data_size": 65536 00:15:48.072 }, 00:15:48.072 { 00:15:48.072 "name": null, 00:15:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.072 "is_configured": false, 00:15:48.072 "data_offset": 0, 00:15:48.072 "data_size": 65536 00:15:48.072 }, 00:15:48.072 { 00:15:48.072 "name": "BaseBdev3", 00:15:48.072 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:48.072 "is_configured": true, 00:15:48.072 "data_offset": 0, 00:15:48.072 "data_size": 65536 00:15:48.072 }, 00:15:48.072 { 00:15:48.072 "name": "BaseBdev4", 00:15:48.072 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:48.072 "is_configured": true, 00:15:48.072 "data_offset": 0, 00:15:48.072 "data_size": 65536 00:15:48.072 } 00:15:48.072 ] 00:15:48.072 }' 00:15:48.072 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.331 [2024-11-05 11:31:47.402312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.331 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.332 "name": "raid_bdev1", 00:15:48.332 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:48.332 "strip_size_kb": 0, 00:15:48.332 "state": "online", 00:15:48.332 "raid_level": "raid1", 00:15:48.332 "superblock": false, 00:15:48.332 "num_base_bdevs": 4, 00:15:48.332 "num_base_bdevs_discovered": 3, 00:15:48.332 "num_base_bdevs_operational": 3, 00:15:48.332 "process": { 00:15:48.332 "type": "rebuild", 00:15:48.332 "target": "spare", 00:15:48.332 "progress": { 00:15:48.332 "blocks": 16384, 00:15:48.332 "percent": 25 00:15:48.332 } 00:15:48.332 }, 00:15:48.332 "base_bdevs_list": [ 00:15:48.332 { 00:15:48.332 "name": "spare", 00:15:48.332 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:48.332 "is_configured": true, 00:15:48.332 "data_offset": 0, 00:15:48.332 "data_size": 65536 00:15:48.332 }, 00:15:48.332 { 00:15:48.332 "name": null, 00:15:48.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.332 "is_configured": false, 00:15:48.332 "data_offset": 0, 00:15:48.332 "data_size": 65536 00:15:48.332 }, 00:15:48.332 { 00:15:48.332 "name": "BaseBdev3", 
00:15:48.332 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:48.332 "is_configured": true, 00:15:48.332 "data_offset": 0, 00:15:48.332 "data_size": 65536 00:15:48.332 }, 00:15:48.332 { 00:15:48.332 "name": "BaseBdev4", 00:15:48.332 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:48.332 "is_configured": true, 00:15:48.332 "data_offset": 0, 00:15:48.332 "data_size": 65536 00:15:48.332 } 00:15:48.332 ] 00:15:48.332 }' 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.332 11:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.591 [2024-11-05 11:31:47.759450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:48.851 122.75 IOPS, 368.25 MiB/s [2024-11-05T11:31:48.125Z] [2024-11-05 11:31:47.969436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:49.110 [2024-11-05 11:31:48.273964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:49.110 [2024-11-05 11:31:48.383262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.370 11:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.371 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.371 11:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.371 11:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.371 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.371 "name": "raid_bdev1", 00:15:49.371 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:49.371 "strip_size_kb": 0, 00:15:49.371 "state": "online", 00:15:49.371 "raid_level": "raid1", 00:15:49.371 "superblock": false, 00:15:49.371 "num_base_bdevs": 4, 00:15:49.371 "num_base_bdevs_discovered": 3, 00:15:49.371 "num_base_bdevs_operational": 3, 00:15:49.371 "process": { 00:15:49.371 "type": "rebuild", 00:15:49.371 "target": "spare", 00:15:49.371 "progress": { 00:15:49.371 "blocks": 30720, 00:15:49.371 "percent": 46 00:15:49.371 } 00:15:49.371 }, 00:15:49.371 "base_bdevs_list": [ 00:15:49.371 { 00:15:49.371 "name": "spare", 00:15:49.371 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:49.371 "is_configured": true, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 }, 00:15:49.371 { 00:15:49.371 "name": null, 00:15:49.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.371 "is_configured": false, 00:15:49.371 
"data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 }, 00:15:49.371 { 00:15:49.371 "name": "BaseBdev3", 00:15:49.371 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:49.371 "is_configured": true, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 }, 00:15:49.371 { 00:15:49.371 "name": "BaseBdev4", 00:15:49.371 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:49.371 "is_configured": true, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 } 00:15:49.371 ] 00:15:49.371 }' 00:15:49.371 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.631 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.631 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.631 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.631 11:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.631 [2024-11-05 11:31:48.738081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:50.201 109.40 IOPS, 328.20 MiB/s [2024-11-05T11:31:49.475Z] [2024-11-05 11:31:49.369680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:50.461 [2024-11-05 11:31:49.485548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.461 11:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.721 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.721 "name": "raid_bdev1", 00:15:50.721 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:50.721 "strip_size_kb": 0, 00:15:50.721 "state": "online", 00:15:50.721 "raid_level": "raid1", 00:15:50.721 "superblock": false, 00:15:50.721 "num_base_bdevs": 4, 00:15:50.721 "num_base_bdevs_discovered": 3, 00:15:50.721 "num_base_bdevs_operational": 3, 00:15:50.721 "process": { 00:15:50.721 "type": "rebuild", 00:15:50.721 "target": "spare", 00:15:50.721 "progress": { 00:15:50.721 "blocks": 47104, 00:15:50.721 "percent": 71 00:15:50.721 } 00:15:50.721 }, 00:15:50.721 "base_bdevs_list": [ 00:15:50.721 { 00:15:50.721 "name": "spare", 00:15:50.721 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:50.721 "is_configured": true, 00:15:50.721 "data_offset": 0, 00:15:50.721 "data_size": 65536 00:15:50.721 }, 00:15:50.721 { 00:15:50.721 "name": null, 00:15:50.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.721 "is_configured": false, 00:15:50.721 "data_offset": 0, 00:15:50.721 "data_size": 65536 00:15:50.721 
}, 00:15:50.721 { 00:15:50.721 "name": "BaseBdev3", 00:15:50.721 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:50.721 "is_configured": true, 00:15:50.721 "data_offset": 0, 00:15:50.721 "data_size": 65536 00:15:50.721 }, 00:15:50.721 { 00:15:50.721 "name": "BaseBdev4", 00:15:50.721 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:50.721 "is_configured": true, 00:15:50.721 "data_offset": 0, 00:15:50.721 "data_size": 65536 00:15:50.721 } 00:15:50.721 ] 00:15:50.721 }' 00:15:50.721 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.721 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.721 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.721 96.83 IOPS, 290.50 MiB/s [2024-11-05T11:31:49.995Z] 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.721 11:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.721 [2024-11-05 11:31:49.930443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:50.721 [2024-11-05 11:31:49.930733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:51.660 [2024-11-05 11:31:50.601417] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:51.660 [2024-11-05 11:31:50.701231] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:51.660 [2024-11-05 11:31:50.702838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.660 87.14 IOPS, 261.43 MiB/s [2024-11-05T11:31:50.934Z] 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.660 11:31:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.660 "name": "raid_bdev1", 00:15:51.660 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:51.660 "strip_size_kb": 0, 00:15:51.660 "state": "online", 00:15:51.660 "raid_level": "raid1", 00:15:51.660 "superblock": false, 00:15:51.660 "num_base_bdevs": 4, 00:15:51.660 "num_base_bdevs_discovered": 3, 00:15:51.660 "num_base_bdevs_operational": 3, 00:15:51.660 "base_bdevs_list": [ 00:15:51.660 { 00:15:51.660 "name": "spare", 00:15:51.660 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:51.660 "is_configured": true, 00:15:51.660 "data_offset": 0, 00:15:51.660 "data_size": 65536 00:15:51.660 }, 00:15:51.660 { 00:15:51.660 "name": null, 00:15:51.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.660 "is_configured": false, 00:15:51.660 "data_offset": 0, 00:15:51.660 "data_size": 
65536 00:15:51.660 }, 00:15:51.660 { 00:15:51.660 "name": "BaseBdev3", 00:15:51.660 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:51.660 "is_configured": true, 00:15:51.660 "data_offset": 0, 00:15:51.660 "data_size": 65536 00:15:51.660 }, 00:15:51.660 { 00:15:51.660 "name": "BaseBdev4", 00:15:51.660 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:51.660 "is_configured": true, 00:15:51.660 "data_offset": 0, 00:15:51.660 "data_size": 65536 00:15:51.660 } 00:15:51.660 ] 00:15:51.660 }' 00:15:51.660 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 11:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.921 "name": "raid_bdev1", 00:15:51.921 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:51.921 "strip_size_kb": 0, 00:15:51.921 "state": "online", 00:15:51.921 "raid_level": "raid1", 00:15:51.921 "superblock": false, 00:15:51.921 "num_base_bdevs": 4, 00:15:51.921 "num_base_bdevs_discovered": 3, 00:15:51.921 "num_base_bdevs_operational": 3, 00:15:51.921 "base_bdevs_list": [ 00:15:51.921 { 00:15:51.921 "name": "spare", 00:15:51.921 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": null, 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.921 "is_configured": false, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "BaseBdev3", 00:15:51.921 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "BaseBdev4", 00:15:51.921 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 } 00:15:51.921 ] 00:15:51.921 }' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.921 "name": "raid_bdev1", 00:15:51.921 "uuid": "73909410-53dc-491a-afe3-9b549bc1cbed", 00:15:51.921 "strip_size_kb": 0, 00:15:51.921 "state": "online", 00:15:51.921 "raid_level": "raid1", 00:15:51.921 "superblock": false, 00:15:51.921 
"num_base_bdevs": 4, 00:15:51.921 "num_base_bdevs_discovered": 3, 00:15:51.921 "num_base_bdevs_operational": 3, 00:15:51.921 "base_bdevs_list": [ 00:15:51.921 { 00:15:51.921 "name": "spare", 00:15:51.921 "uuid": "2d9b3a1e-a309-5dc9-a645-8c3671b93882", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": null, 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.921 "is_configured": false, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "BaseBdev3", 00:15:51.921 "uuid": "429f5bae-1150-534c-a9d1-f06230207deb", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "BaseBdev4", 00:15:51.921 "uuid": "08f45348-79cb-5eff-9e7e-800199452c88", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 0, 00:15:51.921 "data_size": 65536 00:15:51.921 } 00:15:51.921 ] 00:15:51.921 }' 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.921 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.491 [2024-11-05 11:31:51.558069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.491 [2024-11-05 11:31:51.558097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.491 00:15:52.491 Latency(us) 00:15:52.491 [2024-11-05T11:31:51.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.491 
Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:52.491 raid_bdev1 : 7.80 82.22 246.65 0.00 0.00 16684.62 327.32 117220.72 00:15:52.491 [2024-11-05T11:31:51.765Z] =================================================================================================================== 00:15:52.491 [2024-11-05T11:31:51.765Z] Total : 82.22 246.65 0.00 0.00 16684.62 327.32 117220.72 00:15:52.491 { 00:15:52.491 "results": [ 00:15:52.491 { 00:15:52.491 "job": "raid_bdev1", 00:15:52.491 "core_mask": "0x1", 00:15:52.491 "workload": "randrw", 00:15:52.491 "percentage": 50, 00:15:52.491 "status": "finished", 00:15:52.491 "queue_depth": 2, 00:15:52.491 "io_size": 3145728, 00:15:52.491 "runtime": 7.796362, 00:15:52.491 "iops": 82.21783442072085, 00:15:52.491 "mibps": 246.65350326216253, 00:15:52.491 "io_failed": 0, 00:15:52.491 "io_timeout": 0, 00:15:52.491 "avg_latency_us": 16684.62372521102, 00:15:52.491 "min_latency_us": 327.32227074235806, 00:15:52.491 "max_latency_us": 117220.7231441048 00:15:52.491 } 00:15:52.491 ], 00:15:52.491 "core_count": 1 00:15:52.491 } 00:15:52.491 [2024-11-05 11:31:51.607250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.491 [2024-11-05 11:31:51.607292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.491 [2024-11-05 11:31:51.607391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.491 [2024-11-05 11:31:51.607404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:52.491 11:31:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.491 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:52.751 /dev/nbd0 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.751 1+0 records in 00:15:52.751 1+0 records out 00:15:52.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444134 s, 9.2 MB/s 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:52.751 11:31:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:53.011 /dev/nbd1 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 
00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.011 1+0 records in 00:15:53.011 1+0 records out 00:15:53.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396216 s, 10.3 MB/s 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.011 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:53.271 
11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:53.271 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.272 11:31:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.272 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:53.531 /dev/nbd1 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.532 1+0 records in 00:15:53.532 1+0 records out 00:15:53.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234028 s, 17.5 MB/s 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.532 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:53.791 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.792 11:31:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.792 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78811 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78811 ']' 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78811 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78811 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:54.052 killing process with pid 78811 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78811' 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78811 00:15:54.052 Received shutdown signal, test time was about 9.508414 seconds 00:15:54.052 00:15:54.052 Latency(us) 00:15:54.052 [2024-11-05T11:31:53.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.052 [2024-11-05T11:31:53.326Z] 
=================================================================================================================== 00:15:54.052 [2024-11-05T11:31:53.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:54.052 [2024-11-05 11:31:53.294703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.052 11:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78811 00:15:54.621 [2024-11-05 11:31:53.688799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.560 11:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:55.560 00:15:55.560 real 0m12.881s 00:15:55.560 user 0m16.297s 00:15:55.560 sys 0m1.751s 00:15:55.560 11:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:55.560 11:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.560 ************************************ 00:15:55.560 END TEST raid_rebuild_test_io 00:15:55.560 ************************************ 00:15:55.821 11:31:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:55.821 11:31:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:55.821 11:31:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:55.821 11:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.821 ************************************ 00:15:55.821 START TEST raid_rebuild_test_sb_io 00:15:55.821 ************************************ 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:55.821 11:31:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:55.821 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79209 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79209 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79209 ']' 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.822 11:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.822 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:55.822 Zero copy mechanism will not be used. 00:15:55.822 [2024-11-05 11:31:54.954452] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:15:55.822 [2024-11-05 11:31:54.954554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79209 ] 00:15:56.082 [2024-11-05 11:31:55.125843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.082 [2024-11-05 11:31:55.225419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.341 [2024-11-05 11:31:55.408482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.341 [2024-11-05 11:31:55.408522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.602 BaseBdev1_malloc 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.602 [2024-11-05 11:31:55.806587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:56.602 [2024-11-05 11:31:55.806661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.602 [2024-11-05 11:31:55.806679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.602 [2024-11-05 11:31:55.806690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.602 [2024-11-05 11:31:55.808747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.602 [2024-11-05 11:31:55.808783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.602 BaseBdev1 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.602 BaseBdev2_malloc 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.602 [2024-11-05 11:31:55.858700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:56.602 [2024-11-05 11:31:55.858753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.602 [2024-11-05 11:31:55.858770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.602 [2024-11-05 11:31:55.858781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.602 [2024-11-05 11:31:55.860802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.602 [2024-11-05 11:31:55.860833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.602 BaseBdev2 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.602 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 BaseBdev3_malloc 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.862 11:31:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 [2024-11-05 11:31:55.943751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:56.862 [2024-11-05 11:31:55.943796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.862 [2024-11-05 11:31:55.943814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:56.862 [2024-11-05 11:31:55.943824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.862 [2024-11-05 11:31:55.945798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.862 [2024-11-05 11:31:55.945834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:56.862 BaseBdev3 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.862 BaseBdev4_malloc 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:56.862 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.863 11:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 [2024-11-05 11:31:55.997768] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:56.863 [2024-11-05 11:31:55.997817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.863 [2024-11-05 11:31:55.997850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:56.863 [2024-11-05 11:31:55.997861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.863 [2024-11-05 11:31:55.999922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.863 [2024-11-05 11:31:56.000013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:56.863 BaseBdev4 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 spare_malloc 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 spare_delay 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 [2024-11-05 11:31:56.065525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:56.863 [2024-11-05 11:31:56.065577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.863 [2024-11-05 11:31:56.065596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:56.863 [2024-11-05 11:31:56.065605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.863 [2024-11-05 11:31:56.067554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.863 [2024-11-05 11:31:56.067584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:56.863 spare 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 [2024-11-05 11:31:56.077551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.863 [2024-11-05 11:31:56.079351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.863 [2024-11-05 11:31:56.079474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.863 [2024-11-05 11:31:56.079548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.863 [2024-11-05 11:31:56.079757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:56.863 [2024-11-05 11:31:56.079809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:56.863 [2024-11-05 11:31:56.080057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:56.863 [2024-11-05 11:31:56.080293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.863 [2024-11-05 11:31:56.080339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:56.863 [2024-11-05 11:31:56.080536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.863 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.122 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.122 "name": "raid_bdev1", 00:15:57.122 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:15:57.123 "strip_size_kb": 0, 00:15:57.123 "state": "online", 00:15:57.123 "raid_level": "raid1", 00:15:57.123 "superblock": true, 00:15:57.123 "num_base_bdevs": 4, 00:15:57.123 "num_base_bdevs_discovered": 4, 00:15:57.123 "num_base_bdevs_operational": 4, 00:15:57.123 "base_bdevs_list": [ 00:15:57.123 { 00:15:57.123 "name": "BaseBdev1", 00:15:57.123 "uuid": "53361471-d169-5e13-96a5-b46972f8aa62", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 2048, 00:15:57.123 "data_size": 63488 00:15:57.123 }, 00:15:57.123 { 00:15:57.123 "name": "BaseBdev2", 00:15:57.123 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 2048, 00:15:57.123 "data_size": 63488 00:15:57.123 }, 00:15:57.123 { 00:15:57.123 "name": "BaseBdev3", 00:15:57.123 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 2048, 00:15:57.123 "data_size": 63488 00:15:57.123 }, 00:15:57.123 { 00:15:57.123 "name": "BaseBdev4", 00:15:57.123 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 2048, 00:15:57.123 "data_size": 63488 00:15:57.123 } 00:15:57.123 ] 00:15:57.123 }' 00:15:57.123 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:57.123 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.382 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.382 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 [2024-11-05 11:31:56.493114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.382 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:57.383 11:31:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 [2024-11-05 11:31:56.592613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.383 "name": "raid_bdev1", 00:15:57.383 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:15:57.383 "strip_size_kb": 0, 00:15:57.383 "state": "online", 00:15:57.383 "raid_level": "raid1", 00:15:57.383 "superblock": true, 00:15:57.383 "num_base_bdevs": 4, 00:15:57.383 "num_base_bdevs_discovered": 3, 00:15:57.383 "num_base_bdevs_operational": 3, 00:15:57.383 "base_bdevs_list": [ 00:15:57.383 { 00:15:57.383 "name": null, 00:15:57.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.383 "is_configured": false, 00:15:57.383 "data_offset": 0, 00:15:57.383 "data_size": 63488 00:15:57.383 }, 00:15:57.383 { 00:15:57.383 "name": "BaseBdev2", 00:15:57.383 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:15:57.383 "is_configured": true, 00:15:57.383 "data_offset": 2048, 00:15:57.383 "data_size": 63488 00:15:57.383 }, 00:15:57.383 { 00:15:57.383 "name": "BaseBdev3", 00:15:57.383 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:15:57.383 "is_configured": true, 00:15:57.383 "data_offset": 2048, 00:15:57.383 "data_size": 63488 00:15:57.383 }, 00:15:57.383 { 00:15:57.383 "name": "BaseBdev4", 00:15:57.383 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:15:57.383 "is_configured": true, 00:15:57.383 "data_offset": 2048, 00:15:57.383 "data_size": 63488 00:15:57.383 } 00:15:57.383 ] 00:15:57.383 }' 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.383 11:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.642 [2024-11-05 11:31:56.688338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:57.642 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.642 Zero copy mechanism will not be used. 
00:15:57.642 Running I/O for 60 seconds... 00:15:57.902 11:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.902 11:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.902 11:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.902 [2024-11-05 11:31:57.050100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.902 11:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.902 11:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.902 [2024-11-05 11:31:57.110323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:57.902 [2024-11-05 11:31:57.112332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.162 [2024-11-05 11:31:57.220774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:58.162 [2024-11-05 11:31:57.221241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:58.162 [2024-11-05 11:31:57.342548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:58.162 [2024-11-05 11:31:57.343221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:58.421 [2024-11-05 11:31:57.671712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:58.683 165.00 IOPS, 495.00 MiB/s [2024-11-05T11:31:57.957Z] [2024-11-05 11:31:57.881834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.683 
[2024-11-05 11:31:57.882204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.953 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.954 "name": "raid_bdev1", 00:15:58.954 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:15:58.954 "strip_size_kb": 0, 00:15:58.954 "state": "online", 00:15:58.954 "raid_level": "raid1", 00:15:58.954 "superblock": true, 00:15:58.954 "num_base_bdevs": 4, 00:15:58.954 "num_base_bdevs_discovered": 4, 00:15:58.954 "num_base_bdevs_operational": 4, 00:15:58.954 "process": { 00:15:58.954 "type": "rebuild", 00:15:58.954 "target": "spare", 00:15:58.954 "progress": { 00:15:58.954 "blocks": 10240, 00:15:58.954 "percent": 16 00:15:58.954 } 00:15:58.954 }, 00:15:58.954 "base_bdevs_list": [ 
00:15:58.954 { 00:15:58.954 "name": "spare", 00:15:58.954 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:15:58.954 "is_configured": true, 00:15:58.954 "data_offset": 2048, 00:15:58.954 "data_size": 63488 00:15:58.954 }, 00:15:58.954 { 00:15:58.954 "name": "BaseBdev2", 00:15:58.954 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:15:58.954 "is_configured": true, 00:15:58.954 "data_offset": 2048, 00:15:58.954 "data_size": 63488 00:15:58.954 }, 00:15:58.954 { 00:15:58.954 "name": "BaseBdev3", 00:15:58.954 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:15:58.954 "is_configured": true, 00:15:58.954 "data_offset": 2048, 00:15:58.954 "data_size": 63488 00:15:58.954 }, 00:15:58.954 { 00:15:58.954 "name": "BaseBdev4", 00:15:58.954 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:15:58.954 "is_configured": true, 00:15:58.954 "data_offset": 2048, 00:15:58.954 "data_size": 63488 00:15:58.954 } 00:15:58.954 ] 00:15:58.954 }' 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.954 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.954 [2024-11-05 11:31:58.210900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:59.237 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.237 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.237 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.237 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.237 [2024-11-05 11:31:58.242922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:59.237 [2024-11-05 11:31:58.328475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:59.237 [2024-11-05 11:31:58.329873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:59.237 [2024-11-05 11:31:58.442249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:59.237 [2024-11-05 11:31:58.453121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.237 [2024-11-05 11:31:58.453175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.237 [2024-11-05 11:31:58.453190] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:59.237 [2024-11-05 11:31:58.492330] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.516 "name": "raid_bdev1", 00:15:59.516 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:15:59.516 "strip_size_kb": 0, 00:15:59.516 "state": "online", 00:15:59.516 "raid_level": "raid1", 00:15:59.516 "superblock": true, 00:15:59.516 "num_base_bdevs": 4, 00:15:59.516 "num_base_bdevs_discovered": 3, 00:15:59.516 "num_base_bdevs_operational": 3, 00:15:59.516 "base_bdevs_list": [ 00:15:59.516 { 00:15:59.516 "name": null, 00:15:59.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.516 "is_configured": false, 00:15:59.516 "data_offset": 0, 00:15:59.516 "data_size": 63488 00:15:59.516 }, 00:15:59.516 { 00:15:59.516 "name": "BaseBdev2", 00:15:59.516 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:15:59.516 "is_configured": true, 00:15:59.516 "data_offset": 2048, 00:15:59.516 "data_size": 63488 00:15:59.516 }, 00:15:59.516 { 00:15:59.516 "name": "BaseBdev3", 00:15:59.516 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:15:59.516 "is_configured": true, 00:15:59.516 "data_offset": 2048, 00:15:59.516 "data_size": 63488 00:15:59.516 }, 
00:15:59.516 { 00:15:59.516 "name": "BaseBdev4", 00:15:59.516 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:15:59.516 "is_configured": true, 00:15:59.516 "data_offset": 2048, 00:15:59.516 "data_size": 63488 00:15:59.516 } 00:15:59.516 ] 00:15:59.516 }' 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.516 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 126.00 IOPS, 378.00 MiB/s [2024-11-05T11:31:59.050Z] 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.776 11:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.776 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.776 "name": "raid_bdev1", 00:15:59.776 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:15:59.776 "strip_size_kb": 0, 00:15:59.776 "state": "online", 00:15:59.776 "raid_level": "raid1", 00:15:59.776 "superblock": true, 00:15:59.776 
"num_base_bdevs": 4, 00:15:59.776 "num_base_bdevs_discovered": 3, 00:15:59.776 "num_base_bdevs_operational": 3, 00:15:59.776 "base_bdevs_list": [ 00:15:59.776 { 00:15:59.776 "name": null, 00:15:59.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.776 "is_configured": false, 00:15:59.776 "data_offset": 0, 00:15:59.776 "data_size": 63488 00:15:59.776 }, 00:15:59.776 { 00:15:59.776 "name": "BaseBdev2", 00:15:59.776 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:15:59.776 "is_configured": true, 00:15:59.776 "data_offset": 2048, 00:15:59.776 "data_size": 63488 00:15:59.776 }, 00:15:59.776 { 00:15:59.776 "name": "BaseBdev3", 00:15:59.776 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:15:59.776 "is_configured": true, 00:15:59.776 "data_offset": 2048, 00:15:59.776 "data_size": 63488 00:15:59.776 }, 00:15:59.776 { 00:15:59.776 "name": "BaseBdev4", 00:15:59.776 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:15:59.776 "is_configured": true, 00:15:59.776 "data_offset": 2048, 00:15:59.776 "data_size": 63488 00:15:59.776 } 00:15:59.776 ] 00:15:59.776 }' 00:15:59.776 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.776 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.036 [2024-11-05 11:31:59.109505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.036 11:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:00.037 [2024-11-05 11:31:59.167611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:00.037 [2024-11-05 11:31:59.169539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.037 [2024-11-05 11:31:59.284814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:00.037 [2024-11-05 11:31:59.285491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:00.296 [2024-11-05 11:31:59.422099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:00.296 [2024-11-05 11:31:59.422507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:00.555 154.00 IOPS, 462.00 MiB/s [2024-11-05T11:31:59.829Z] [2024-11-05 11:31:59.762807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:00.555 [2024-11-05 11:31:59.764183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:00.814 [2024-11-05 11:31:59.975309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.073 "name": "raid_bdev1", 00:16:01.073 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:01.073 "strip_size_kb": 0, 00:16:01.073 "state": "online", 00:16:01.073 "raid_level": "raid1", 00:16:01.073 "superblock": true, 00:16:01.073 "num_base_bdevs": 4, 00:16:01.073 "num_base_bdevs_discovered": 4, 00:16:01.073 "num_base_bdevs_operational": 4, 00:16:01.073 "process": { 00:16:01.073 "type": "rebuild", 00:16:01.073 "target": "spare", 00:16:01.073 "progress": { 00:16:01.073 "blocks": 10240, 00:16:01.073 "percent": 16 00:16:01.073 } 00:16:01.073 }, 00:16:01.073 "base_bdevs_list": [ 00:16:01.073 { 00:16:01.073 "name": "spare", 00:16:01.073 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:01.073 "is_configured": true, 00:16:01.073 "data_offset": 2048, 00:16:01.073 "data_size": 63488 00:16:01.073 }, 00:16:01.073 { 00:16:01.073 "name": "BaseBdev2", 00:16:01.073 "uuid": "f09ca200-822b-5a72-b1b0-652454be9f1d", 00:16:01.073 "is_configured": true, 00:16:01.073 "data_offset": 2048, 00:16:01.073 "data_size": 63488 00:16:01.073 }, 00:16:01.073 { 00:16:01.073 "name": "BaseBdev3", 
00:16:01.073 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:01.073 "is_configured": true, 00:16:01.073 "data_offset": 2048, 00:16:01.073 "data_size": 63488 00:16:01.073 }, 00:16:01.073 { 00:16:01.073 "name": "BaseBdev4", 00:16:01.073 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:01.073 "is_configured": true, 00:16:01.073 "data_offset": 2048, 00:16:01.073 "data_size": 63488 00:16:01.073 } 00:16:01.073 ] 00:16:01.073 }' 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:01.073 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:01.074 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.074 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.074 [2024-11-05 11:32:00.302412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:01.074 [2024-11-05 11:32:00.303615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:01.333 [2024-11-05 11:32:00.404557] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:01.333 [2024-11-05 11:32:00.404621] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:01.333 "name": "raid_bdev1", 00:16:01.333 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:01.333 "strip_size_kb": 0, 00:16:01.333 "state": "online", 00:16:01.333 "raid_level": "raid1", 00:16:01.333 "superblock": true, 00:16:01.333 "num_base_bdevs": 4, 00:16:01.333 "num_base_bdevs_discovered": 3, 00:16:01.333 "num_base_bdevs_operational": 3, 00:16:01.333 "process": { 00:16:01.333 "type": "rebuild", 00:16:01.333 "target": "spare", 00:16:01.333 "progress": { 00:16:01.333 "blocks": 14336, 00:16:01.333 "percent": 22 00:16:01.333 } 00:16:01.333 }, 00:16:01.333 "base_bdevs_list": [ 00:16:01.333 { 00:16:01.333 "name": "spare", 00:16:01.333 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:01.333 "is_configured": true, 00:16:01.333 "data_offset": 2048, 00:16:01.333 "data_size": 63488 00:16:01.333 }, 00:16:01.333 { 00:16:01.333 "name": null, 00:16:01.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.333 "is_configured": false, 00:16:01.333 "data_offset": 0, 00:16:01.333 "data_size": 63488 00:16:01.333 }, 00:16:01.333 { 00:16:01.333 "name": "BaseBdev3", 00:16:01.333 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:01.333 "is_configured": true, 00:16:01.333 "data_offset": 2048, 00:16:01.333 "data_size": 63488 00:16:01.333 }, 00:16:01.333 { 00:16:01.333 "name": "BaseBdev4", 00:16:01.333 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:01.333 "is_configured": true, 00:16:01.333 "data_offset": 2048, 00:16:01.333 "data_size": 63488 00:16:01.333 } 00:16:01.333 ] 00:16:01.333 }' 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.333 [2024-11-05 11:32:00.527973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.333 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.593 "name": "raid_bdev1", 00:16:01.593 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:01.593 "strip_size_kb": 0, 00:16:01.593 "state": "online", 00:16:01.593 "raid_level": "raid1", 00:16:01.593 "superblock": true, 00:16:01.593 "num_base_bdevs": 4, 00:16:01.593 "num_base_bdevs_discovered": 3, 00:16:01.593 "num_base_bdevs_operational": 3, 
00:16:01.593 "process": { 00:16:01.593 "type": "rebuild", 00:16:01.593 "target": "spare", 00:16:01.593 "progress": { 00:16:01.593 "blocks": 16384, 00:16:01.593 "percent": 25 00:16:01.593 } 00:16:01.593 }, 00:16:01.593 "base_bdevs_list": [ 00:16:01.593 { 00:16:01.593 "name": "spare", 00:16:01.593 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:01.593 "is_configured": true, 00:16:01.593 "data_offset": 2048, 00:16:01.593 "data_size": 63488 00:16:01.593 }, 00:16:01.593 { 00:16:01.593 "name": null, 00:16:01.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.593 "is_configured": false, 00:16:01.593 "data_offset": 0, 00:16:01.593 "data_size": 63488 00:16:01.593 }, 00:16:01.593 { 00:16:01.593 "name": "BaseBdev3", 00:16:01.593 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:01.593 "is_configured": true, 00:16:01.593 "data_offset": 2048, 00:16:01.593 "data_size": 63488 00:16:01.593 }, 00:16:01.593 { 00:16:01.593 "name": "BaseBdev4", 00:16:01.593 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:01.593 "is_configured": true, 00:16:01.593 "data_offset": 2048, 00:16:01.593 "data_size": 63488 00:16:01.593 } 00:16:01.593 ] 00:16:01.593 }' 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.593 11:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.853 133.50 IOPS, 400.50 MiB/s [2024-11-05T11:32:01.127Z] [2024-11-05 11:32:01.048309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:02.113 [2024-11-05 11:32:01.154703] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:02.372 [2024-11-05 11:32:01.483563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.632 117.20 IOPS, 351.60 MiB/s [2024-11-05T11:32:01.906Z] [2024-11-05 11:32:01.700532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.632 "name": "raid_bdev1", 00:16:02.632 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:02.632 "strip_size_kb": 0, 
00:16:02.632 "state": "online", 00:16:02.632 "raid_level": "raid1", 00:16:02.632 "superblock": true, 00:16:02.632 "num_base_bdevs": 4, 00:16:02.632 "num_base_bdevs_discovered": 3, 00:16:02.632 "num_base_bdevs_operational": 3, 00:16:02.632 "process": { 00:16:02.632 "type": "rebuild", 00:16:02.632 "target": "spare", 00:16:02.632 "progress": { 00:16:02.632 "blocks": 34816, 00:16:02.632 "percent": 54 00:16:02.632 } 00:16:02.632 }, 00:16:02.632 "base_bdevs_list": [ 00:16:02.632 { 00:16:02.632 "name": "spare", 00:16:02.632 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:02.632 "is_configured": true, 00:16:02.632 "data_offset": 2048, 00:16:02.632 "data_size": 63488 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": null, 00:16:02.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.632 "is_configured": false, 00:16:02.632 "data_offset": 0, 00:16:02.632 "data_size": 63488 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": "BaseBdev3", 00:16:02.632 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:02.632 "is_configured": true, 00:16:02.632 "data_offset": 2048, 00:16:02.632 "data_size": 63488 00:16:02.632 }, 00:16:02.632 { 00:16:02.632 "name": "BaseBdev4", 00:16:02.632 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:02.632 "is_configured": true, 00:16:02.632 "data_offset": 2048, 00:16:02.632 "data_size": 63488 00:16:02.632 } 00:16:02.632 ] 00:16:02.632 }' 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.632 11:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.891 [2024-11-05 11:32:01.938159] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:02.891 [2024-11-05 11:32:02.045165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:03.718 105.83 IOPS, 317.50 MiB/s [2024-11-05T11:32:02.992Z] [2024-11-05 11:32:02.836444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.718 "name": "raid_bdev1", 00:16:03.718 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:03.718 "strip_size_kb": 0, 
00:16:03.718 "state": "online", 00:16:03.718 "raid_level": "raid1", 00:16:03.718 "superblock": true, 00:16:03.718 "num_base_bdevs": 4, 00:16:03.718 "num_base_bdevs_discovered": 3, 00:16:03.718 "num_base_bdevs_operational": 3, 00:16:03.718 "process": { 00:16:03.718 "type": "rebuild", 00:16:03.718 "target": "spare", 00:16:03.718 "progress": { 00:16:03.718 "blocks": 53248, 00:16:03.718 "percent": 83 00:16:03.718 } 00:16:03.718 }, 00:16:03.718 "base_bdevs_list": [ 00:16:03.718 { 00:16:03.718 "name": "spare", 00:16:03.718 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:03.718 "is_configured": true, 00:16:03.718 "data_offset": 2048, 00:16:03.718 "data_size": 63488 00:16:03.718 }, 00:16:03.718 { 00:16:03.718 "name": null, 00:16:03.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.718 "is_configured": false, 00:16:03.718 "data_offset": 0, 00:16:03.718 "data_size": 63488 00:16:03.718 }, 00:16:03.718 { 00:16:03.718 "name": "BaseBdev3", 00:16:03.718 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:03.718 "is_configured": true, 00:16:03.718 "data_offset": 2048, 00:16:03.718 "data_size": 63488 00:16:03.718 }, 00:16:03.718 { 00:16:03.718 "name": "BaseBdev4", 00:16:03.718 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:03.718 "is_configured": true, 00:16:03.718 "data_offset": 2048, 00:16:03.718 "data_size": 63488 00:16:03.718 } 00:16:03.718 ] 00:16:03.718 }' 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.718 11:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.977 [2024-11-05 11:32:03.170983] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:04.236 [2024-11-05 11:32:03.373196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:04.236 [2024-11-05 11:32:03.373608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:04.496 95.00 IOPS, 285.00 MiB/s [2024-11-05T11:32:03.770Z] [2024-11-05 11:32:03.705213] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:04.756 [2024-11-05 11:32:03.810113] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:04.756 [2024-11-05 11:32:03.813767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.756 11:32:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.756 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.016 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.016 "name": "raid_bdev1", 00:16:05.016 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:05.016 "strip_size_kb": 0, 00:16:05.016 "state": "online", 00:16:05.016 "raid_level": "raid1", 00:16:05.016 "superblock": true, 00:16:05.016 "num_base_bdevs": 4, 00:16:05.016 "num_base_bdevs_discovered": 3, 00:16:05.016 "num_base_bdevs_operational": 3, 00:16:05.017 "base_bdevs_list": [ 00:16:05.017 { 00:16:05.017 "name": "spare", 00:16:05.017 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 00:16:05.017 { 00:16:05.017 "name": null, 00:16:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.017 "is_configured": false, 00:16:05.017 "data_offset": 0, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 00:16:05.017 { 00:16:05.017 "name": "BaseBdev3", 00:16:05.017 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 00:16:05.017 { 00:16:05.017 "name": "BaseBdev4", 00:16:05.017 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 } 00:16:05.017 ] 00:16:05.017 }' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.017 "name": "raid_bdev1", 00:16:05.017 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:05.017 "strip_size_kb": 0, 00:16:05.017 "state": "online", 00:16:05.017 "raid_level": "raid1", 00:16:05.017 "superblock": true, 00:16:05.017 "num_base_bdevs": 4, 00:16:05.017 "num_base_bdevs_discovered": 3, 00:16:05.017 "num_base_bdevs_operational": 3, 00:16:05.017 "base_bdevs_list": [ 00:16:05.017 { 00:16:05.017 "name": "spare", 00:16:05.017 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 
00:16:05.017 { 00:16:05.017 "name": null, 00:16:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.017 "is_configured": false, 00:16:05.017 "data_offset": 0, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 00:16:05.017 { 00:16:05.017 "name": "BaseBdev3", 00:16:05.017 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 }, 00:16:05.017 { 00:16:05.017 "name": "BaseBdev4", 00:16:05.017 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:05.017 "is_configured": true, 00:16:05.017 "data_offset": 2048, 00:16:05.017 "data_size": 63488 00:16:05.017 } 00:16:05.017 ] 00:16:05.017 }' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.017 11:32:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.017 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.277 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.277 "name": "raid_bdev1", 00:16:05.277 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:05.277 "strip_size_kb": 0, 00:16:05.277 "state": "online", 00:16:05.277 "raid_level": "raid1", 00:16:05.277 "superblock": true, 00:16:05.277 "num_base_bdevs": 4, 00:16:05.277 "num_base_bdevs_discovered": 3, 00:16:05.277 "num_base_bdevs_operational": 3, 00:16:05.277 "base_bdevs_list": [ 00:16:05.277 { 00:16:05.277 "name": "spare", 00:16:05.277 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:05.277 "is_configured": true, 00:16:05.277 "data_offset": 2048, 00:16:05.277 "data_size": 63488 00:16:05.277 }, 00:16:05.277 { 00:16:05.277 "name": null, 00:16:05.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.277 "is_configured": false, 00:16:05.277 "data_offset": 0, 00:16:05.277 "data_size": 63488 00:16:05.277 }, 00:16:05.277 { 00:16:05.277 "name": "BaseBdev3", 00:16:05.277 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:05.277 "is_configured": true, 00:16:05.277 "data_offset": 2048, 00:16:05.277 
"data_size": 63488 00:16:05.277 }, 00:16:05.277 { 00:16:05.277 "name": "BaseBdev4", 00:16:05.277 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:05.277 "is_configured": true, 00:16:05.277 "data_offset": 2048, 00:16:05.277 "data_size": 63488 00:16:05.277 } 00:16:05.277 ] 00:16:05.277 }' 00:16:05.277 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.277 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.537 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.537 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.537 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.537 [2024-11-05 11:32:04.682540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.537 [2024-11-05 11:32:04.682572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.537 87.88 IOPS, 263.62 MiB/s 00:16:05.537 Latency(us) 00:16:05.537 [2024-11-05T11:32:04.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.537 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:05.537 raid_bdev1 : 8.11 86.96 260.87 0.00 0.00 14902.06 298.70 117220.72 00:16:05.537 [2024-11-05T11:32:04.811Z] =================================================================================================================== 00:16:05.537 [2024-11-05T11:32:04.811Z] Total : 86.96 260.87 0.00 0.00 14902.06 298.70 117220.72 00:16:05.537 [2024-11-05 11:32:04.802225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.537 [2024-11-05 11:32:04.802300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.537 [2024-11-05 11:32:04.802414] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.537 [2024-11-05 11:32:04.802478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:05.537 { 00:16:05.537 "results": [ 00:16:05.537 { 00:16:05.537 "job": "raid_bdev1", 00:16:05.537 "core_mask": "0x1", 00:16:05.537 "workload": "randrw", 00:16:05.537 "percentage": 50, 00:16:05.537 "status": "finished", 00:16:05.537 "queue_depth": 2, 00:16:05.537 "io_size": 3145728, 00:16:05.537 "runtime": 8.107364, 00:16:05.537 "iops": 86.95798042372341, 00:16:05.537 "mibps": 260.8739412711702, 00:16:05.537 "io_failed": 0, 00:16:05.537 "io_timeout": 0, 00:16:05.537 "avg_latency_us": 14902.055808479667, 00:16:05.537 "min_latency_us": 298.70393013100437, 00:16:05.537 "max_latency_us": 117220.7231441048 00:16:05.537 } 00:16:05.537 ], 00:16:05.537 "core_count": 1 00:16:05.537 } 00:16:05.537 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.537 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.796 11:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:05.796 /dev/nbd0 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:05.796 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.055 1+0 records in 00:16:06.055 1+0 records out 00:16:06.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489643 s, 8.4 MB/s 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:06.055 11:32:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:06.055 /dev/nbd1 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.055 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.315 1+0 records in 00:16:06.315 1+0 records out 00:16:06.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377727 s, 10.8 MB/s 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.315 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.574 11:32:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.574 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:06.833 /dev/nbd1 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.834 1+0 records in 00:16:06.834 1+0 records out 00:16:06.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232709 s, 17.6 MB/s 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.834 11:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.834 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.094 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.354 
11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.354 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.354 [2024-11-05 11:32:06.514562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.355 [2024-11-05 11:32:06.514627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.355 [2024-11-05 11:32:06.514647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:07.355 [2024-11-05 11:32:06.514656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.355 [2024-11-05 11:32:06.516825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.355 [2024-11-05 11:32:06.516863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.355 [2024-11-05 11:32:06.516953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.355 [2024-11-05 11:32:06.517005] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.355 [2024-11-05 11:32:06.517170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.355 [2024-11-05 11:32:06.517259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.355 spare 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.355 [2024-11-05 11:32:06.617155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:07.355 [2024-11-05 11:32:06.617180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.355 [2024-11-05 11:32:06.617454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:07.355 [2024-11-05 11:32:06.617639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:07.355 [2024-11-05 11:32:06.617657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:07.355 [2024-11-05 11:32:06.617812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.355 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.614 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.614 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.614 "name": "raid_bdev1", 00:16:07.614 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:07.614 "strip_size_kb": 0, 00:16:07.614 "state": "online", 00:16:07.614 "raid_level": "raid1", 00:16:07.614 "superblock": true, 00:16:07.614 "num_base_bdevs": 4, 00:16:07.614 "num_base_bdevs_discovered": 3, 00:16:07.614 "num_base_bdevs_operational": 3, 00:16:07.614 "base_bdevs_list": [ 00:16:07.614 { 00:16:07.614 "name": "spare", 00:16:07.614 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:07.614 "is_configured": true, 
00:16:07.614 "data_offset": 2048, 00:16:07.614 "data_size": 63488 00:16:07.614 }, 00:16:07.614 { 00:16:07.614 "name": null, 00:16:07.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.614 "is_configured": false, 00:16:07.614 "data_offset": 2048, 00:16:07.614 "data_size": 63488 00:16:07.614 }, 00:16:07.614 { 00:16:07.614 "name": "BaseBdev3", 00:16:07.614 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:07.614 "is_configured": true, 00:16:07.614 "data_offset": 2048, 00:16:07.614 "data_size": 63488 00:16:07.614 }, 00:16:07.614 { 00:16:07.614 "name": "BaseBdev4", 00:16:07.614 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:07.615 "is_configured": true, 00:16:07.615 "data_offset": 2048, 00:16:07.615 "data_size": 63488 00:16:07.615 } 00:16:07.615 ] 00:16:07.615 }' 00:16:07.615 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.615 11:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.874 "name": "raid_bdev1", 00:16:07.874 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:07.874 "strip_size_kb": 0, 00:16:07.874 "state": "online", 00:16:07.874 "raid_level": "raid1", 00:16:07.874 "superblock": true, 00:16:07.874 "num_base_bdevs": 4, 00:16:07.874 "num_base_bdevs_discovered": 3, 00:16:07.874 "num_base_bdevs_operational": 3, 00:16:07.874 "base_bdevs_list": [ 00:16:07.874 { 00:16:07.874 "name": "spare", 00:16:07.874 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:07.874 "is_configured": true, 00:16:07.874 "data_offset": 2048, 00:16:07.874 "data_size": 63488 00:16:07.874 }, 00:16:07.874 { 00:16:07.874 "name": null, 00:16:07.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.874 "is_configured": false, 00:16:07.874 "data_offset": 2048, 00:16:07.874 "data_size": 63488 00:16:07.874 }, 00:16:07.874 { 00:16:07.874 "name": "BaseBdev3", 00:16:07.874 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:07.874 "is_configured": true, 00:16:07.874 "data_offset": 2048, 00:16:07.874 "data_size": 63488 00:16:07.874 }, 00:16:07.874 { 00:16:07.874 "name": "BaseBdev4", 00:16:07.874 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:07.874 "is_configured": true, 00:16:07.874 "data_offset": 2048, 00:16:07.874 "data_size": 63488 00:16:07.874 } 00:16:07.874 ] 00:16:07.874 }' 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.874 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.133 [2024-11-05 11:32:07.221500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.133 11:32:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.133 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.134 "name": "raid_bdev1", 00:16:08.134 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:08.134 "strip_size_kb": 0, 00:16:08.134 "state": "online", 00:16:08.134 "raid_level": "raid1", 00:16:08.134 "superblock": true, 00:16:08.134 "num_base_bdevs": 4, 00:16:08.134 "num_base_bdevs_discovered": 2, 00:16:08.134 "num_base_bdevs_operational": 2, 00:16:08.134 "base_bdevs_list": [ 00:16:08.134 { 00:16:08.134 "name": null, 00:16:08.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.134 "is_configured": false, 00:16:08.134 "data_offset": 0, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 00:16:08.134 { 00:16:08.134 "name": null, 00:16:08.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.134 "is_configured": false, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 00:16:08.134 { 00:16:08.134 "name": "BaseBdev3", 00:16:08.134 "uuid": 
"02f06995-36c2-50a3-af7b-409f18a47007", 00:16:08.134 "is_configured": true, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 }, 00:16:08.134 { 00:16:08.134 "name": "BaseBdev4", 00:16:08.134 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:08.134 "is_configured": true, 00:16:08.134 "data_offset": 2048, 00:16:08.134 "data_size": 63488 00:16:08.134 } 00:16:08.134 ] 00:16:08.134 }' 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.134 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.392 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.392 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.392 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.393 [2024-11-05 11:32:07.632880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.393 [2024-11-05 11:32:07.633095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:08.393 [2024-11-05 11:32:07.633118] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:08.393 [2024-11-05 11:32:07.633166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.393 [2024-11-05 11:32:07.647259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:08.393 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.393 11:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.393 [2024-11-05 11:32:07.649005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.772 "name": "raid_bdev1", 00:16:09.772 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:09.772 "strip_size_kb": 0, 00:16:09.772 "state": "online", 
00:16:09.772 "raid_level": "raid1", 00:16:09.772 "superblock": true, 00:16:09.772 "num_base_bdevs": 4, 00:16:09.772 "num_base_bdevs_discovered": 3, 00:16:09.772 "num_base_bdevs_operational": 3, 00:16:09.772 "process": { 00:16:09.772 "type": "rebuild", 00:16:09.772 "target": "spare", 00:16:09.772 "progress": { 00:16:09.772 "blocks": 20480, 00:16:09.772 "percent": 32 00:16:09.772 } 00:16:09.772 }, 00:16:09.772 "base_bdevs_list": [ 00:16:09.772 { 00:16:09.772 "name": "spare", 00:16:09.772 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:09.772 "is_configured": true, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": null, 00:16:09.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.772 "is_configured": false, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": "BaseBdev3", 00:16:09.772 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:09.772 "is_configured": true, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": "BaseBdev4", 00:16:09.772 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:09.772 "is_configured": true, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 } 00:16:09.772 ] 00:16:09.772 }' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.772 11:32:08 
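Comparisons such as `[[ rebuild == \r\e\b\u\i\l\d ]]` in the trace are xtrace artifacts: inside `[[ ]]` an unquoted right-hand side is a glob pattern, so bash's trace output escapes every character to preserve the literal match. A small illustration (variable names here are ours, not the script's):

```shell
#!/usr/bin/env bash
# Inside [[ ]], a quoted (or fully escaped) right-hand side is matched
# literally; an unquoted one is treated as a glob pattern. The escaped
# form seen in the xtrace output is bash re-quoting the literal string.
process_type="rebuild"

if [[ $process_type == "rebuild" ]]; then   # quoted RHS: literal match
    echo "rebuild in progress"
fi

[[ $process_type == re* ]] && echo "matches glob re*"   # unquoted RHS: glob
```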
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.772 [2024-11-05 11:32:08.808800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.772 [2024-11-05 11:32:08.853668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.772 [2024-11-05 11:32:08.853722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.772 [2024-11-05 11:32:08.853756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.772 [2024-11-05 11:32:08.853763] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.772 11:32:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.772 "name": "raid_bdev1", 00:16:09.772 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:09.772 "strip_size_kb": 0, 00:16:09.772 "state": "online", 00:16:09.772 "raid_level": "raid1", 00:16:09.772 "superblock": true, 00:16:09.772 "num_base_bdevs": 4, 00:16:09.772 "num_base_bdevs_discovered": 2, 00:16:09.772 "num_base_bdevs_operational": 2, 00:16:09.772 "base_bdevs_list": [ 00:16:09.772 { 00:16:09.772 "name": null, 00:16:09.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.772 "is_configured": false, 00:16:09.772 "data_offset": 0, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": null, 00:16:09.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.772 "is_configured": false, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": "BaseBdev3", 00:16:09.772 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:09.772 "is_configured": true, 00:16:09.772 "data_offset": 2048, 00:16:09.772 "data_size": 63488 00:16:09.772 }, 00:16:09.772 { 00:16:09.772 "name": "BaseBdev4", 00:16:09.772 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:09.772 "is_configured": true, 00:16:09.772 "data_offset": 2048, 00:16:09.772 
"data_size": 63488 00:16:09.772 } 00:16:09.772 ] 00:16:09.772 }' 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.772 11:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.342 11:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.342 11:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.342 11:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.342 [2024-11-05 11:32:09.341108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.342 [2024-11-05 11:32:09.341257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.342 [2024-11-05 11:32:09.341300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:10.342 [2024-11-05 11:32:09.341328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.342 [2024-11-05 11:32:09.341807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.342 [2024-11-05 11:32:09.341874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.342 [2024-11-05 11:32:09.341990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.342 [2024-11-05 11:32:09.342030] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:10.342 [2024-11-05 11:32:09.342074] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
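The `verify_raid_bdev_state` calls in the trace extract fields (state, raid_level, strip_size_kb, operational base bdev count) from the `bdev_raid_get_bdevs` JSON and compare each against an expected value. A hedged sketch of that per-field check; `check_state` is an illustrative stand-in, not the helper's real name or implementation:

```shell
#!/usr/bin/env bash
# Sketch of a field-by-field state check in the style of
# verify_raid_bdev_state: succeed only if the extracted value
# equals the expected one, otherwise report and fail.
check_state() {
    local expected=$1 actual=$2 field=$3
    if [[ $actual == "$expected" ]]; then
        echo "ok: $field=$actual"
    else
        echo "FAIL: $field expected=$expected got=$actual" >&2
        return 1
    fi
}

# Values mirror the raid_bdev_info dump above.
check_state online online state
check_state raid1 raid1 raid_level
check_state 2 2 num_base_bdevs_operational
```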
00:16:10.342 [2024-11-05 11:32:09.342143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.342 [2024-11-05 11:32:09.356557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:10.342 spare 00:16:10.342 11:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.342 [2024-11-05 11:32:09.358374] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.342 11:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.279 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.279 "name": "raid_bdev1", 00:16:11.279 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:11.279 "strip_size_kb": 0, 00:16:11.279 
"state": "online", 00:16:11.279 "raid_level": "raid1", 00:16:11.279 "superblock": true, 00:16:11.279 "num_base_bdevs": 4, 00:16:11.279 "num_base_bdevs_discovered": 3, 00:16:11.279 "num_base_bdevs_operational": 3, 00:16:11.279 "process": { 00:16:11.279 "type": "rebuild", 00:16:11.279 "target": "spare", 00:16:11.279 "progress": { 00:16:11.279 "blocks": 20480, 00:16:11.279 "percent": 32 00:16:11.279 } 00:16:11.279 }, 00:16:11.279 "base_bdevs_list": [ 00:16:11.279 { 00:16:11.279 "name": "spare", 00:16:11.279 "uuid": "e3a20791-403e-5951-909a-a82697dadecf", 00:16:11.279 "is_configured": true, 00:16:11.279 "data_offset": 2048, 00:16:11.279 "data_size": 63488 00:16:11.279 }, 00:16:11.279 { 00:16:11.279 "name": null, 00:16:11.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.280 "is_configured": false, 00:16:11.280 "data_offset": 2048, 00:16:11.280 "data_size": 63488 00:16:11.280 }, 00:16:11.280 { 00:16:11.280 "name": "BaseBdev3", 00:16:11.280 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:11.280 "is_configured": true, 00:16:11.280 "data_offset": 2048, 00:16:11.280 "data_size": 63488 00:16:11.280 }, 00:16:11.280 { 00:16:11.280 "name": "BaseBdev4", 00:16:11.280 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:11.280 "is_configured": true, 00:16:11.280 "data_offset": 2048, 00:16:11.280 "data_size": 63488 00:16:11.280 } 00:16:11.280 ] 00:16:11.280 }' 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:11.280 11:32:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 [2024-11-05 11:32:10.521666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.539 [2024-11-05 11:32:10.563106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.540 [2024-11-05 11:32:10.563193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.540 [2024-11-05 11:32:10.563210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.540 [2024-11-05 11:32:10.563219] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.540 11:32:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.540 "name": "raid_bdev1", 00:16:11.540 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:11.540 "strip_size_kb": 0, 00:16:11.540 "state": "online", 00:16:11.540 "raid_level": "raid1", 00:16:11.540 "superblock": true, 00:16:11.540 "num_base_bdevs": 4, 00:16:11.540 "num_base_bdevs_discovered": 2, 00:16:11.540 "num_base_bdevs_operational": 2, 00:16:11.540 "base_bdevs_list": [ 00:16:11.540 { 00:16:11.540 "name": null, 00:16:11.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.540 "is_configured": false, 00:16:11.540 "data_offset": 0, 00:16:11.540 "data_size": 63488 00:16:11.540 }, 00:16:11.540 { 00:16:11.540 "name": null, 00:16:11.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.540 "is_configured": false, 00:16:11.540 "data_offset": 2048, 00:16:11.540 "data_size": 63488 00:16:11.540 }, 00:16:11.540 { 00:16:11.540 "name": "BaseBdev3", 00:16:11.540 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:11.540 "is_configured": true, 00:16:11.540 "data_offset": 2048, 00:16:11.540 "data_size": 63488 00:16:11.540 }, 00:16:11.540 { 00:16:11.540 "name": "BaseBdev4", 00:16:11.540 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:11.540 "is_configured": true, 00:16:11.540 "data_offset": 2048, 00:16:11.540 
"data_size": 63488 00:16:11.540 } 00:16:11.540 ] 00:16:11.540 }' 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.540 11:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.799 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.800 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.800 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.800 "name": "raid_bdev1", 00:16:11.800 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:11.800 "strip_size_kb": 0, 00:16:11.800 "state": "online", 00:16:11.800 "raid_level": "raid1", 00:16:11.800 "superblock": true, 00:16:11.800 "num_base_bdevs": 4, 00:16:11.800 "num_base_bdevs_discovered": 2, 00:16:11.800 "num_base_bdevs_operational": 2, 00:16:11.800 "base_bdevs_list": [ 00:16:11.800 { 00:16:11.800 "name": null, 00:16:11.800 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:11.800 "is_configured": false, 00:16:11.800 "data_offset": 0, 00:16:11.800 "data_size": 63488 00:16:11.800 }, 00:16:11.800 { 00:16:11.800 "name": null, 00:16:11.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.800 "is_configured": false, 00:16:11.800 "data_offset": 2048, 00:16:11.800 "data_size": 63488 00:16:11.800 }, 00:16:11.800 { 00:16:11.800 "name": "BaseBdev3", 00:16:11.800 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:11.800 "is_configured": true, 00:16:11.800 "data_offset": 2048, 00:16:11.800 "data_size": 63488 00:16:11.800 }, 00:16:11.800 { 00:16:11.800 "name": "BaseBdev4", 00:16:11.800 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:11.800 "is_configured": true, 00:16:11.800 "data_offset": 2048, 00:16:11.800 "data_size": 63488 00:16:11.800 } 00:16:11.800 ] 00:16:11.800 }' 00:16:11.800 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.059 11:32:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.059 [2024-11-05 11:32:11.167482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:12.059 [2024-11-05 11:32:11.167540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.059 [2024-11-05 11:32:11.167576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:12.059 [2024-11-05 11:32:11.167586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.059 [2024-11-05 11:32:11.168018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.059 [2024-11-05 11:32:11.168047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.059 [2024-11-05 11:32:11.168126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:12.059 [2024-11-05 11:32:11.168161] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:12.059 [2024-11-05 11:32:11.168169] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.059 [2024-11-05 11:32:11.168184] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:12.059 BaseBdev1 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.059 11:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.016 "name": "raid_bdev1", 00:16:13.016 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:13.016 "strip_size_kb": 0, 00:16:13.016 "state": "online", 00:16:13.016 "raid_level": "raid1", 00:16:13.016 "superblock": true, 00:16:13.016 "num_base_bdevs": 4, 00:16:13.016 "num_base_bdevs_discovered": 2, 00:16:13.016 "num_base_bdevs_operational": 2, 00:16:13.016 "base_bdevs_list": [ 00:16:13.016 { 00:16:13.016 "name": null, 00:16:13.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.016 "is_configured": false, 00:16:13.016 
"data_offset": 0, 00:16:13.016 "data_size": 63488 00:16:13.016 }, 00:16:13.016 { 00:16:13.016 "name": null, 00:16:13.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.016 "is_configured": false, 00:16:13.016 "data_offset": 2048, 00:16:13.016 "data_size": 63488 00:16:13.016 }, 00:16:13.016 { 00:16:13.016 "name": "BaseBdev3", 00:16:13.016 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:13.016 "is_configured": true, 00:16:13.016 "data_offset": 2048, 00:16:13.016 "data_size": 63488 00:16:13.016 }, 00:16:13.016 { 00:16:13.016 "name": "BaseBdev4", 00:16:13.016 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:13.016 "is_configured": true, 00:16:13.016 "data_offset": 2048, 00:16:13.016 "data_size": 63488 00:16:13.016 } 00:16:13.016 ] 00:16:13.016 }' 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.016 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.586 "name": "raid_bdev1", 00:16:13.586 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:13.586 "strip_size_kb": 0, 00:16:13.586 "state": "online", 00:16:13.586 "raid_level": "raid1", 00:16:13.586 "superblock": true, 00:16:13.586 "num_base_bdevs": 4, 00:16:13.586 "num_base_bdevs_discovered": 2, 00:16:13.586 "num_base_bdevs_operational": 2, 00:16:13.586 "base_bdevs_list": [ 00:16:13.586 { 00:16:13.586 "name": null, 00:16:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.586 "is_configured": false, 00:16:13.586 "data_offset": 0, 00:16:13.586 "data_size": 63488 00:16:13.586 }, 00:16:13.586 { 00:16:13.586 "name": null, 00:16:13.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.586 "is_configured": false, 00:16:13.586 "data_offset": 2048, 00:16:13.586 "data_size": 63488 00:16:13.586 }, 00:16:13.586 { 00:16:13.586 "name": "BaseBdev3", 00:16:13.586 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:13.586 "is_configured": true, 00:16:13.586 "data_offset": 2048, 00:16:13.586 "data_size": 63488 00:16:13.586 }, 00:16:13.586 { 00:16:13.586 "name": "BaseBdev4", 00:16:13.586 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:13.586 "is_configured": true, 00:16:13.586 "data_offset": 2048, 00:16:13.586 "data_size": 63488 00:16:13.586 } 00:16:13.586 ] 00:16:13.586 }' 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.586 
11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.586 [2024-11-05 11:32:12.741126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.586 [2024-11-05 11:32:12.741306] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:13.586 [2024-11-05 11:32:12.741327] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.586 request: 00:16:13.586 { 00:16:13.586 "base_bdev": "BaseBdev1", 00:16:13.586 "raid_bdev": "raid_bdev1", 00:16:13.586 "method": "bdev_raid_add_base_bdev", 00:16:13.586 "req_id": 1 00:16:13.586 } 00:16:13.586 Got JSON-RPC error response 00:16:13.586 response: 00:16:13.586 { 00:16:13.586 "code": -22, 00:16:13.586 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.586 } 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.586 11:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.526 11:32:13 
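The `es` handling above (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`) implements a negative test: `NOT rpc_cmd …` succeeds only when the wrapped RPC fails, while exit codes above 128 (signal deaths) still propagate as real failures. A sketch under those assumptions; this is not necessarily the exact `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Negative-test wrapper in the spirit of the trace: run a command that
# is EXPECTED to fail and succeed only on failure. Exit statuses above
# 128 mean the command died from a signal, which is a genuine error.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return $es   # killed by a signal: propagate
    (( es != 0 ))                  # succeed only if the command failed
}

NOT false && echo "expected failure observed"
```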
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.526 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.785 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.785 "name": "raid_bdev1", 00:16:14.785 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:14.785 "strip_size_kb": 0, 00:16:14.785 "state": "online", 00:16:14.785 "raid_level": "raid1", 00:16:14.785 "superblock": true, 00:16:14.785 "num_base_bdevs": 4, 00:16:14.785 "num_base_bdevs_discovered": 2, 00:16:14.785 "num_base_bdevs_operational": 2, 00:16:14.785 "base_bdevs_list": [ 00:16:14.785 { 00:16:14.785 "name": null, 00:16:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.785 "is_configured": false, 00:16:14.785 "data_offset": 0, 00:16:14.785 "data_size": 63488 00:16:14.785 }, 00:16:14.785 { 00:16:14.785 "name": null, 00:16:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.785 "is_configured": false, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 }, 00:16:14.785 { 00:16:14.785 "name": "BaseBdev3", 00:16:14.785 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:14.785 "is_configured": true, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 }, 00:16:14.785 { 00:16:14.785 "name": "BaseBdev4", 00:16:14.785 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:14.785 "is_configured": true, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 } 00:16:14.785 ] 00:16:14.785 }' 00:16:14.785 11:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.785 11:32:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.045 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.045 "name": "raid_bdev1", 00:16:15.045 "uuid": "4e019e96-ae0c-496c-b163-118eaaa0d7bc", 00:16:15.045 "strip_size_kb": 0, 00:16:15.045 "state": "online", 00:16:15.045 "raid_level": "raid1", 00:16:15.045 "superblock": true, 00:16:15.045 "num_base_bdevs": 4, 00:16:15.045 "num_base_bdevs_discovered": 2, 00:16:15.045 "num_base_bdevs_operational": 2, 00:16:15.045 "base_bdevs_list": [ 00:16:15.045 { 00:16:15.045 "name": null, 00:16:15.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.045 "is_configured": false, 00:16:15.045 "data_offset": 0, 00:16:15.045 "data_size": 63488 00:16:15.045 }, 00:16:15.045 { 00:16:15.045 "name": null, 00:16:15.045 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:15.045 "is_configured": false, 00:16:15.045 "data_offset": 2048, 00:16:15.045 "data_size": 63488 00:16:15.045 }, 00:16:15.045 { 00:16:15.045 "name": "BaseBdev3", 00:16:15.045 "uuid": "02f06995-36c2-50a3-af7b-409f18a47007", 00:16:15.045 "is_configured": true, 00:16:15.045 "data_offset": 2048, 00:16:15.045 "data_size": 63488 00:16:15.045 }, 00:16:15.045 { 00:16:15.045 "name": "BaseBdev4", 00:16:15.046 "uuid": "5b02e470-f6ec-5214-8ee4-9d8f9b250147", 00:16:15.046 "is_configured": true, 00:16:15.046 "data_offset": 2048, 00:16:15.046 "data_size": 63488 00:16:15.046 } 00:16:15.046 ] 00:16:15.046 }' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79209 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79209 ']' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79209 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79209 00:16:15.046 killing process with pid 79209 00:16:15.046 Received shutdown signal, test time was about 17.609090 seconds 00:16:15.046 00:16:15.046 Latency(us) 00:16:15.046 [2024-11-05T11:32:14.320Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:15.046 [2024-11-05T11:32:14.320Z] =================================================================================================================== 00:16:15.046 [2024-11-05T11:32:14.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79209' 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79209 00:16:15.046 [2024-11-05 11:32:14.265648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.046 [2024-11-05 11:32:14.265769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.046 11:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79209 00:16:15.046 [2024-11-05 11:32:14.265837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.046 [2024-11-05 11:32:14.265847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:15.615 [2024-11-05 11:32:14.660117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.555 ************************************ 00:16:16.555 END TEST raid_rebuild_test_sb_io 00:16:16.555 ************************************ 00:16:16.555 11:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:16.555 00:16:16.555 real 0m20.903s 00:16:16.555 user 0m27.147s 00:16:16.555 sys 0m2.563s 00:16:16.555 11:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.555 11:32:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.555 11:32:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:16.555 11:32:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:16.555 11:32:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:16.555 11:32:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.555 11:32:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.815 ************************************ 00:16:16.815 START TEST raid5f_state_function_test 00:16:16.815 ************************************ 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.815 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.816 11:32:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79935 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.816 11:32:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79935' 00:16:16.816 Process raid pid: 79935 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79935 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 79935 ']' 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.816 11:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.816 [2024-11-05 11:32:15.935626] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:16:16.816 [2024-11-05 11:32:15.935757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.076 [2024-11-05 11:32:16.111501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.076 [2024-11-05 11:32:16.216043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.335 [2024-11-05 11:32:16.405011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.335 [2024-11-05 11:32:16.405061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.595 [2024-11-05 11:32:16.754440] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.595 [2024-11-05 11:32:16.754489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.595 [2024-11-05 11:32:16.754500] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.595 [2024-11-05 11:32:16.754510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.595 [2024-11-05 11:32:16.754516] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:17.595 [2024-11-05 11:32:16.754525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.595 "name": "Existed_Raid", 00:16:17.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.595 "strip_size_kb": 64, 00:16:17.595 "state": "configuring", 00:16:17.595 "raid_level": "raid5f", 00:16:17.595 "superblock": false, 00:16:17.595 "num_base_bdevs": 3, 00:16:17.595 "num_base_bdevs_discovered": 0, 00:16:17.595 "num_base_bdevs_operational": 3, 00:16:17.595 "base_bdevs_list": [ 00:16:17.595 { 00:16:17.595 "name": "BaseBdev1", 00:16:17.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.595 "is_configured": false, 00:16:17.595 "data_offset": 0, 00:16:17.595 "data_size": 0 00:16:17.595 }, 00:16:17.595 { 00:16:17.595 "name": "BaseBdev2", 00:16:17.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.595 "is_configured": false, 00:16:17.595 "data_offset": 0, 00:16:17.595 "data_size": 0 00:16:17.595 }, 00:16:17.595 { 00:16:17.595 "name": "BaseBdev3", 00:16:17.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.595 "is_configured": false, 00:16:17.595 "data_offset": 0, 00:16:17.595 "data_size": 0 00:16:17.595 } 00:16:17.595 ] 00:16:17.595 }' 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.595 11:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 [2024-11-05 11:32:17.213576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.165 [2024-11-05 11:32:17.213612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 [2024-11-05 11:32:17.225561] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.165 [2024-11-05 11:32:17.225598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.165 [2024-11-05 11:32:17.225622] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.165 [2024-11-05 11:32:17.225630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.165 [2024-11-05 11:32:17.225636] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.165 [2024-11-05 11:32:17.225645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 [2024-11-05 11:32:17.267528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.165 BaseBdev1 00:16:18.165 11:32:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 [ 00:16:18.165 { 00:16:18.165 "name": "BaseBdev1", 00:16:18.165 "aliases": [ 00:16:18.165 "991bfc39-3248-4e0d-a9a7-7a52195c283f" 00:16:18.165 ], 00:16:18.165 "product_name": "Malloc disk", 00:16:18.165 "block_size": 512, 00:16:18.165 "num_blocks": 65536, 00:16:18.165 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:18.165 "assigned_rate_limits": { 00:16:18.165 "rw_ios_per_sec": 0, 00:16:18.165 
"rw_mbytes_per_sec": 0, 00:16:18.165 "r_mbytes_per_sec": 0, 00:16:18.165 "w_mbytes_per_sec": 0 00:16:18.165 }, 00:16:18.165 "claimed": true, 00:16:18.165 "claim_type": "exclusive_write", 00:16:18.165 "zoned": false, 00:16:18.165 "supported_io_types": { 00:16:18.165 "read": true, 00:16:18.165 "write": true, 00:16:18.165 "unmap": true, 00:16:18.165 "flush": true, 00:16:18.165 "reset": true, 00:16:18.165 "nvme_admin": false, 00:16:18.165 "nvme_io": false, 00:16:18.165 "nvme_io_md": false, 00:16:18.165 "write_zeroes": true, 00:16:18.165 "zcopy": true, 00:16:18.165 "get_zone_info": false, 00:16:18.165 "zone_management": false, 00:16:18.165 "zone_append": false, 00:16:18.165 "compare": false, 00:16:18.165 "compare_and_write": false, 00:16:18.165 "abort": true, 00:16:18.165 "seek_hole": false, 00:16:18.165 "seek_data": false, 00:16:18.165 "copy": true, 00:16:18.165 "nvme_iov_md": false 00:16:18.165 }, 00:16:18.165 "memory_domains": [ 00:16:18.165 { 00:16:18.165 "dma_device_id": "system", 00:16:18.165 "dma_device_type": 1 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.165 "dma_device_type": 2 00:16:18.165 } 00:16:18.165 ], 00:16:18.165 "driver_specific": {} 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.165 11:32:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.165 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.166 "name": "Existed_Raid", 00:16:18.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.166 "strip_size_kb": 64, 00:16:18.166 "state": "configuring", 00:16:18.166 "raid_level": "raid5f", 00:16:18.166 "superblock": false, 00:16:18.166 "num_base_bdevs": 3, 00:16:18.166 "num_base_bdevs_discovered": 1, 00:16:18.166 "num_base_bdevs_operational": 3, 00:16:18.166 "base_bdevs_list": [ 00:16:18.166 { 00:16:18.166 "name": "BaseBdev1", 00:16:18.166 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:18.166 "is_configured": true, 00:16:18.166 "data_offset": 0, 00:16:18.166 "data_size": 65536 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "name": 
"BaseBdev2", 00:16:18.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.166 "is_configured": false, 00:16:18.166 "data_offset": 0, 00:16:18.166 "data_size": 0 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "name": "BaseBdev3", 00:16:18.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.166 "is_configured": false, 00:16:18.166 "data_offset": 0, 00:16:18.166 "data_size": 0 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }' 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.166 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.735 [2024-11-05 11:32:17.754899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.735 [2024-11-05 11:32:17.754951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.735 [2024-11-05 11:32:17.766933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.735 [2024-11-05 11:32:17.768784] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:18.735 [2024-11-05 11:32:17.768822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.735 [2024-11-05 11:32:17.768833] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.735 [2024-11-05 11:32:17.768841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.735 "name": "Existed_Raid", 00:16:18.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.735 "strip_size_kb": 64, 00:16:18.735 "state": "configuring", 00:16:18.735 "raid_level": "raid5f", 00:16:18.735 "superblock": false, 00:16:18.735 "num_base_bdevs": 3, 00:16:18.735 "num_base_bdevs_discovered": 1, 00:16:18.735 "num_base_bdevs_operational": 3, 00:16:18.735 "base_bdevs_list": [ 00:16:18.735 { 00:16:18.735 "name": "BaseBdev1", 00:16:18.735 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:18.735 "is_configured": true, 00:16:18.735 "data_offset": 0, 00:16:18.735 "data_size": 65536 00:16:18.735 }, 00:16:18.735 { 00:16:18.735 "name": "BaseBdev2", 00:16:18.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.735 "is_configured": false, 00:16:18.735 "data_offset": 0, 00:16:18.735 "data_size": 0 00:16:18.735 }, 00:16:18.735 { 00:16:18.735 "name": "BaseBdev3", 00:16:18.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.735 "is_configured": false, 00:16:18.735 "data_offset": 0, 00:16:18.735 "data_size": 0 00:16:18.735 } 00:16:18.735 ] 00:16:18.735 }' 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.735 11:32:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.995 11:32:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.995 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.995 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.255 [2024-11-05 11:32:18.288581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.255 BaseBdev2 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.255 [ 00:16:19.255 { 00:16:19.255 "name": "BaseBdev2", 00:16:19.255 "aliases": [ 00:16:19.255 "14f98259-ae13-46ec-8d4d-e190e7a69995" 00:16:19.255 ], 00:16:19.255 "product_name": "Malloc disk", 00:16:19.255 "block_size": 512, 00:16:19.255 "num_blocks": 65536, 00:16:19.255 "uuid": "14f98259-ae13-46ec-8d4d-e190e7a69995", 00:16:19.255 "assigned_rate_limits": { 00:16:19.255 "rw_ios_per_sec": 0, 00:16:19.255 "rw_mbytes_per_sec": 0, 00:16:19.255 "r_mbytes_per_sec": 0, 00:16:19.255 "w_mbytes_per_sec": 0 00:16:19.255 }, 00:16:19.255 "claimed": true, 00:16:19.255 "claim_type": "exclusive_write", 00:16:19.255 "zoned": false, 00:16:19.255 "supported_io_types": { 00:16:19.255 "read": true, 00:16:19.255 "write": true, 00:16:19.255 "unmap": true, 00:16:19.255 "flush": true, 00:16:19.255 "reset": true, 00:16:19.255 "nvme_admin": false, 00:16:19.255 "nvme_io": false, 00:16:19.255 "nvme_io_md": false, 00:16:19.255 "write_zeroes": true, 00:16:19.255 "zcopy": true, 00:16:19.255 "get_zone_info": false, 00:16:19.255 "zone_management": false, 00:16:19.255 "zone_append": false, 00:16:19.255 "compare": false, 00:16:19.255 "compare_and_write": false, 00:16:19.255 "abort": true, 00:16:19.255 "seek_hole": false, 00:16:19.255 "seek_data": false, 00:16:19.255 "copy": true, 00:16:19.255 "nvme_iov_md": false 00:16:19.255 }, 00:16:19.255 "memory_domains": [ 00:16:19.255 { 00:16:19.255 "dma_device_id": "system", 00:16:19.255 "dma_device_type": 1 00:16:19.255 }, 00:16:19.255 { 00:16:19.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.255 "dma_device_type": 2 00:16:19.255 } 00:16:19.255 ], 00:16:19.255 "driver_specific": {} 00:16:19.255 } 00:16:19.255 ] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:19.255 "name": "Existed_Raid", 00:16:19.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.255 "strip_size_kb": 64, 00:16:19.255 "state": "configuring", 00:16:19.255 "raid_level": "raid5f", 00:16:19.255 "superblock": false, 00:16:19.255 "num_base_bdevs": 3, 00:16:19.255 "num_base_bdevs_discovered": 2, 00:16:19.255 "num_base_bdevs_operational": 3, 00:16:19.255 "base_bdevs_list": [ 00:16:19.255 { 00:16:19.255 "name": "BaseBdev1", 00:16:19.255 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:19.255 "is_configured": true, 00:16:19.255 "data_offset": 0, 00:16:19.255 "data_size": 65536 00:16:19.255 }, 00:16:19.255 { 00:16:19.255 "name": "BaseBdev2", 00:16:19.255 "uuid": "14f98259-ae13-46ec-8d4d-e190e7a69995", 00:16:19.255 "is_configured": true, 00:16:19.255 "data_offset": 0, 00:16:19.255 "data_size": 65536 00:16:19.255 }, 00:16:19.255 { 00:16:19.255 "name": "BaseBdev3", 00:16:19.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.255 "is_configured": false, 00:16:19.255 "data_offset": 0, 00:16:19.255 "data_size": 0 00:16:19.255 } 00:16:19.255 ] 00:16:19.255 }' 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.255 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.515 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.515 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.515 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 [2024-11-05 11:32:18.839479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.775 [2024-11-05 11:32:18.839555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.775 [2024-11-05 11:32:18.839568] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:19.775 [2024-11-05 11:32:18.839827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:19.775 [2024-11-05 11:32:18.844855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.775 [2024-11-05 11:32:18.844878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:19.775 [2024-11-05 11:32:18.845161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.775 BaseBdev3 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 [ 00:16:19.775 { 00:16:19.775 "name": "BaseBdev3", 00:16:19.775 "aliases": [ 00:16:19.775 "047d6be1-8039-47fe-87cf-b64b2d618296" 00:16:19.775 ], 00:16:19.775 "product_name": "Malloc disk", 00:16:19.775 "block_size": 512, 00:16:19.775 "num_blocks": 65536, 00:16:19.775 "uuid": "047d6be1-8039-47fe-87cf-b64b2d618296", 00:16:19.775 "assigned_rate_limits": { 00:16:19.775 "rw_ios_per_sec": 0, 00:16:19.775 "rw_mbytes_per_sec": 0, 00:16:19.775 "r_mbytes_per_sec": 0, 00:16:19.775 "w_mbytes_per_sec": 0 00:16:19.775 }, 00:16:19.775 "claimed": true, 00:16:19.775 "claim_type": "exclusive_write", 00:16:19.775 "zoned": false, 00:16:19.775 "supported_io_types": { 00:16:19.775 "read": true, 00:16:19.775 "write": true, 00:16:19.775 "unmap": true, 00:16:19.775 "flush": true, 00:16:19.775 "reset": true, 00:16:19.775 "nvme_admin": false, 00:16:19.775 "nvme_io": false, 00:16:19.775 "nvme_io_md": false, 00:16:19.775 "write_zeroes": true, 00:16:19.775 "zcopy": true, 00:16:19.775 "get_zone_info": false, 00:16:19.775 "zone_management": false, 00:16:19.775 "zone_append": false, 00:16:19.775 "compare": false, 00:16:19.775 "compare_and_write": false, 00:16:19.775 "abort": true, 00:16:19.775 "seek_hole": false, 00:16:19.775 "seek_data": false, 00:16:19.775 "copy": true, 00:16:19.775 "nvme_iov_md": false 00:16:19.775 }, 00:16:19.775 "memory_domains": [ 00:16:19.775 { 00:16:19.775 "dma_device_id": "system", 00:16:19.775 "dma_device_type": 1 00:16:19.775 }, 00:16:19.775 { 00:16:19.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.775 "dma_device_type": 2 00:16:19.775 } 00:16:19.775 ], 00:16:19.775 "driver_specific": {} 00:16:19.775 } 00:16:19.775 ] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.775 11:32:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.775 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.776 "name": "Existed_Raid", 00:16:19.776 "uuid": "3fbb2bac-277d-4342-9498-d05109d4a2f8", 00:16:19.776 "strip_size_kb": 64, 00:16:19.776 "state": "online", 00:16:19.776 "raid_level": "raid5f", 00:16:19.776 "superblock": false, 00:16:19.776 "num_base_bdevs": 3, 00:16:19.776 "num_base_bdevs_discovered": 3, 00:16:19.776 "num_base_bdevs_operational": 3, 00:16:19.776 "base_bdevs_list": [ 00:16:19.776 { 00:16:19.776 "name": "BaseBdev1", 00:16:19.776 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:19.776 "is_configured": true, 00:16:19.776 "data_offset": 0, 00:16:19.776 "data_size": 65536 00:16:19.776 }, 00:16:19.776 { 00:16:19.776 "name": "BaseBdev2", 00:16:19.776 "uuid": "14f98259-ae13-46ec-8d4d-e190e7a69995", 00:16:19.776 "is_configured": true, 00:16:19.776 "data_offset": 0, 00:16:19.776 "data_size": 65536 00:16:19.776 }, 00:16:19.776 { 00:16:19.776 "name": "BaseBdev3", 00:16:19.776 "uuid": "047d6be1-8039-47fe-87cf-b64b2d618296", 00:16:19.776 "is_configured": true, 00:16:19.776 "data_offset": 0, 00:16:19.776 "data_size": 65536 00:16:19.776 } 00:16:19.776 ] 00:16:19.776 }' 00:16:19.776 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.776 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.035 11:32:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.035 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.295 [2024-11-05 11:32:19.310542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.295 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.295 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.295 "name": "Existed_Raid", 00:16:20.295 "aliases": [ 00:16:20.295 "3fbb2bac-277d-4342-9498-d05109d4a2f8" 00:16:20.295 ], 00:16:20.295 "product_name": "Raid Volume", 00:16:20.295 "block_size": 512, 00:16:20.295 "num_blocks": 131072, 00:16:20.295 "uuid": "3fbb2bac-277d-4342-9498-d05109d4a2f8", 00:16:20.295 "assigned_rate_limits": { 00:16:20.295 "rw_ios_per_sec": 0, 00:16:20.295 "rw_mbytes_per_sec": 0, 00:16:20.295 "r_mbytes_per_sec": 0, 00:16:20.295 "w_mbytes_per_sec": 0 00:16:20.295 }, 00:16:20.295 "claimed": false, 00:16:20.295 "zoned": false, 00:16:20.295 "supported_io_types": { 00:16:20.295 "read": true, 00:16:20.295 "write": true, 00:16:20.295 "unmap": false, 00:16:20.295 "flush": false, 00:16:20.295 "reset": true, 00:16:20.295 "nvme_admin": false, 00:16:20.295 "nvme_io": false, 00:16:20.295 "nvme_io_md": false, 00:16:20.295 "write_zeroes": true, 00:16:20.295 "zcopy": false, 00:16:20.295 "get_zone_info": false, 00:16:20.295 "zone_management": false, 00:16:20.295 "zone_append": false, 
00:16:20.295 "compare": false, 00:16:20.295 "compare_and_write": false, 00:16:20.295 "abort": false, 00:16:20.295 "seek_hole": false, 00:16:20.295 "seek_data": false, 00:16:20.295 "copy": false, 00:16:20.295 "nvme_iov_md": false 00:16:20.295 }, 00:16:20.295 "driver_specific": { 00:16:20.295 "raid": { 00:16:20.295 "uuid": "3fbb2bac-277d-4342-9498-d05109d4a2f8", 00:16:20.295 "strip_size_kb": 64, 00:16:20.295 "state": "online", 00:16:20.295 "raid_level": "raid5f", 00:16:20.295 "superblock": false, 00:16:20.295 "num_base_bdevs": 3, 00:16:20.295 "num_base_bdevs_discovered": 3, 00:16:20.295 "num_base_bdevs_operational": 3, 00:16:20.295 "base_bdevs_list": [ 00:16:20.295 { 00:16:20.295 "name": "BaseBdev1", 00:16:20.295 "uuid": "991bfc39-3248-4e0d-a9a7-7a52195c283f", 00:16:20.295 "is_configured": true, 00:16:20.295 "data_offset": 0, 00:16:20.295 "data_size": 65536 00:16:20.295 }, 00:16:20.295 { 00:16:20.295 "name": "BaseBdev2", 00:16:20.295 "uuid": "14f98259-ae13-46ec-8d4d-e190e7a69995", 00:16:20.295 "is_configured": true, 00:16:20.295 "data_offset": 0, 00:16:20.295 "data_size": 65536 00:16:20.295 }, 00:16:20.295 { 00:16:20.295 "name": "BaseBdev3", 00:16:20.295 "uuid": "047d6be1-8039-47fe-87cf-b64b2d618296", 00:16:20.295 "is_configured": true, 00:16:20.295 "data_offset": 0, 00:16:20.295 "data_size": 65536 00:16:20.295 } 00:16:20.295 ] 00:16:20.295 } 00:16:20.295 } 00:16:20.295 }' 00:16:20.295 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.295 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:20.295 BaseBdev2 00:16:20.295 BaseBdev3' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.296 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.555 [2024-11-05 11:32:19.573982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:20.555 
11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.555 "name": "Existed_Raid", 00:16:20.555 "uuid": "3fbb2bac-277d-4342-9498-d05109d4a2f8", 00:16:20.555 "strip_size_kb": 64, 00:16:20.555 "state": 
"online", 00:16:20.555 "raid_level": "raid5f", 00:16:20.555 "superblock": false, 00:16:20.555 "num_base_bdevs": 3, 00:16:20.555 "num_base_bdevs_discovered": 2, 00:16:20.555 "num_base_bdevs_operational": 2, 00:16:20.555 "base_bdevs_list": [ 00:16:20.555 { 00:16:20.555 "name": null, 00:16:20.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.555 "is_configured": false, 00:16:20.555 "data_offset": 0, 00:16:20.555 "data_size": 65536 00:16:20.555 }, 00:16:20.555 { 00:16:20.555 "name": "BaseBdev2", 00:16:20.555 "uuid": "14f98259-ae13-46ec-8d4d-e190e7a69995", 00:16:20.555 "is_configured": true, 00:16:20.555 "data_offset": 0, 00:16:20.555 "data_size": 65536 00:16:20.555 }, 00:16:20.555 { 00:16:20.555 "name": "BaseBdev3", 00:16:20.555 "uuid": "047d6be1-8039-47fe-87cf-b64b2d618296", 00:16:20.555 "is_configured": true, 00:16:20.555 "data_offset": 0, 00:16:20.555 "data_size": 65536 00:16:20.555 } 00:16:20.555 ] 00:16:20.555 }' 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.555 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.814 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.073 [2024-11-05 11:32:20.099609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.073 [2024-11-05 11:32:20.099710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.073 [2024-11-05 11:32:20.189259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.073 [2024-11-05 11:32:20.249200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.073 [2024-11-05 11:32:20.249246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.073 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 BaseBdev2 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:21.333 [ 00:16:21.333 { 00:16:21.333 "name": "BaseBdev2", 00:16:21.333 "aliases": [ 00:16:21.333 "5c58ec4f-0ec3-4d8f-a268-92726b7633ab" 00:16:21.333 ], 00:16:21.333 "product_name": "Malloc disk", 00:16:21.333 "block_size": 512, 00:16:21.333 "num_blocks": 65536, 00:16:21.333 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:21.333 "assigned_rate_limits": { 00:16:21.333 "rw_ios_per_sec": 0, 00:16:21.333 "rw_mbytes_per_sec": 0, 00:16:21.333 "r_mbytes_per_sec": 0, 00:16:21.333 "w_mbytes_per_sec": 0 00:16:21.333 }, 00:16:21.333 "claimed": false, 00:16:21.333 "zoned": false, 00:16:21.333 "supported_io_types": { 00:16:21.333 "read": true, 00:16:21.333 "write": true, 00:16:21.333 "unmap": true, 00:16:21.333 "flush": true, 00:16:21.333 "reset": true, 00:16:21.333 "nvme_admin": false, 00:16:21.333 "nvme_io": false, 00:16:21.333 "nvme_io_md": false, 00:16:21.333 "write_zeroes": true, 00:16:21.333 "zcopy": true, 00:16:21.333 "get_zone_info": false, 00:16:21.333 "zone_management": false, 00:16:21.333 "zone_append": false, 00:16:21.333 "compare": false, 00:16:21.333 "compare_and_write": false, 00:16:21.333 "abort": true, 00:16:21.333 "seek_hole": false, 00:16:21.333 "seek_data": false, 00:16:21.333 "copy": true, 00:16:21.333 "nvme_iov_md": false 00:16:21.333 }, 00:16:21.333 "memory_domains": [ 00:16:21.333 { 00:16:21.333 "dma_device_id": "system", 00:16:21.333 "dma_device_type": 1 00:16:21.333 }, 00:16:21.333 { 00:16:21.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.333 "dma_device_type": 2 00:16:21.333 } 00:16:21.333 ], 00:16:21.333 "driver_specific": {} 00:16:21.333 } 00:16:21.333 ] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 BaseBdev3 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:21.333 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.334 [ 00:16:21.334 { 00:16:21.334 "name": "BaseBdev3", 00:16:21.334 "aliases": [ 00:16:21.334 "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5" 00:16:21.334 ], 00:16:21.334 "product_name": "Malloc disk", 00:16:21.334 "block_size": 512, 00:16:21.334 "num_blocks": 65536, 00:16:21.334 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:21.334 "assigned_rate_limits": { 00:16:21.334 "rw_ios_per_sec": 0, 00:16:21.334 "rw_mbytes_per_sec": 0, 00:16:21.334 "r_mbytes_per_sec": 0, 00:16:21.334 "w_mbytes_per_sec": 0 00:16:21.334 }, 00:16:21.334 "claimed": false, 00:16:21.334 "zoned": false, 00:16:21.334 "supported_io_types": { 00:16:21.334 "read": true, 00:16:21.334 "write": true, 00:16:21.334 "unmap": true, 00:16:21.334 "flush": true, 00:16:21.334 "reset": true, 00:16:21.334 "nvme_admin": false, 00:16:21.334 "nvme_io": false, 00:16:21.334 "nvme_io_md": false, 00:16:21.334 "write_zeroes": true, 00:16:21.334 "zcopy": true, 00:16:21.334 "get_zone_info": false, 00:16:21.334 "zone_management": false, 00:16:21.334 "zone_append": false, 00:16:21.334 "compare": false, 00:16:21.334 "compare_and_write": false, 00:16:21.334 "abort": true, 00:16:21.334 "seek_hole": false, 00:16:21.334 "seek_data": false, 00:16:21.334 "copy": true, 00:16:21.334 "nvme_iov_md": false 00:16:21.334 }, 00:16:21.334 "memory_domains": [ 00:16:21.334 { 00:16:21.334 "dma_device_id": "system", 00:16:21.334 "dma_device_type": 1 00:16:21.334 }, 00:16:21.334 { 00:16:21.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.334 "dma_device_type": 2 00:16:21.334 } 00:16:21.334 ], 00:16:21.334 "driver_specific": {} 00:16:21.334 } 00:16:21.334 ] 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.334 11:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.334 [2024-11-05 11:32:20.541132] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.334 [2024-11-05 11:32:20.541270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.334 [2024-11-05 11:32:20.541295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.334 [2024-11-05 11:32:20.543030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.334 11:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.334 "name": "Existed_Raid", 00:16:21.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.334 "strip_size_kb": 64, 00:16:21.334 "state": "configuring", 00:16:21.334 "raid_level": "raid5f", 00:16:21.334 "superblock": false, 00:16:21.334 "num_base_bdevs": 3, 00:16:21.334 "num_base_bdevs_discovered": 2, 00:16:21.334 "num_base_bdevs_operational": 3, 00:16:21.334 "base_bdevs_list": [ 00:16:21.334 { 00:16:21.334 "name": "BaseBdev1", 00:16:21.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.334 "is_configured": false, 00:16:21.334 "data_offset": 0, 00:16:21.334 "data_size": 0 00:16:21.334 }, 00:16:21.334 { 00:16:21.334 "name": "BaseBdev2", 00:16:21.334 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:21.334 "is_configured": true, 00:16:21.334 "data_offset": 0, 00:16:21.334 "data_size": 65536 00:16:21.334 }, 00:16:21.334 { 00:16:21.334 "name": "BaseBdev3", 00:16:21.334 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:21.334 "is_configured": true, 
00:16:21.334 "data_offset": 0, 00:16:21.334 "data_size": 65536 00:16:21.334 } 00:16:21.334 ] 00:16:21.334 }' 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.334 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.904 [2024-11-05 11:32:20.984374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.904 11:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.904 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.904 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.904 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.904 "name": "Existed_Raid", 00:16:21.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.904 "strip_size_kb": 64, 00:16:21.904 "state": "configuring", 00:16:21.904 "raid_level": "raid5f", 00:16:21.904 "superblock": false, 00:16:21.904 "num_base_bdevs": 3, 00:16:21.904 "num_base_bdevs_discovered": 1, 00:16:21.904 "num_base_bdevs_operational": 3, 00:16:21.904 "base_bdevs_list": [ 00:16:21.904 { 00:16:21.904 "name": "BaseBdev1", 00:16:21.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.904 "is_configured": false, 00:16:21.904 "data_offset": 0, 00:16:21.904 "data_size": 0 00:16:21.904 }, 00:16:21.904 { 00:16:21.904 "name": null, 00:16:21.904 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:21.904 "is_configured": false, 00:16:21.904 "data_offset": 0, 00:16:21.904 "data_size": 65536 00:16:21.904 }, 00:16:21.904 { 00:16:21.904 "name": "BaseBdev3", 00:16:21.904 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:21.904 "is_configured": true, 00:16:21.904 "data_offset": 0, 00:16:21.904 "data_size": 65536 00:16:21.904 } 00:16:21.904 ] 00:16:21.904 }' 00:16:21.904 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.904 11:32:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.164 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.164 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.164 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.164 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.164 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.424 [2024-11-05 11:32:21.487041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.424 BaseBdev1 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:22.424 11:32:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.424 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.425 [ 00:16:22.425 { 00:16:22.425 "name": "BaseBdev1", 00:16:22.425 "aliases": [ 00:16:22.425 "7bb7d77a-9d06-456d-838a-1b06cfcb9aee" 00:16:22.425 ], 00:16:22.425 "product_name": "Malloc disk", 00:16:22.425 "block_size": 512, 00:16:22.425 "num_blocks": 65536, 00:16:22.425 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:22.425 "assigned_rate_limits": { 00:16:22.425 "rw_ios_per_sec": 0, 00:16:22.425 "rw_mbytes_per_sec": 0, 00:16:22.425 "r_mbytes_per_sec": 0, 00:16:22.425 "w_mbytes_per_sec": 0 00:16:22.425 }, 00:16:22.425 "claimed": true, 00:16:22.425 "claim_type": "exclusive_write", 00:16:22.425 "zoned": false, 00:16:22.425 "supported_io_types": { 00:16:22.425 "read": true, 00:16:22.425 "write": true, 00:16:22.425 "unmap": true, 00:16:22.425 "flush": true, 00:16:22.425 "reset": true, 00:16:22.425 "nvme_admin": false, 00:16:22.425 "nvme_io": false, 00:16:22.425 "nvme_io_md": false, 00:16:22.425 "write_zeroes": true, 00:16:22.425 "zcopy": true, 00:16:22.425 "get_zone_info": false, 00:16:22.425 "zone_management": false, 00:16:22.425 "zone_append": false, 00:16:22.425 
"compare": false, 00:16:22.425 "compare_and_write": false, 00:16:22.425 "abort": true, 00:16:22.425 "seek_hole": false, 00:16:22.425 "seek_data": false, 00:16:22.425 "copy": true, 00:16:22.425 "nvme_iov_md": false 00:16:22.425 }, 00:16:22.425 "memory_domains": [ 00:16:22.425 { 00:16:22.425 "dma_device_id": "system", 00:16:22.425 "dma_device_type": 1 00:16:22.425 }, 00:16:22.425 { 00:16:22.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.425 "dma_device_type": 2 00:16:22.425 } 00:16:22.425 ], 00:16:22.425 "driver_specific": {} 00:16:22.425 } 00:16:22.425 ] 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.425 11:32:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.425 "name": "Existed_Raid", 00:16:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.425 "strip_size_kb": 64, 00:16:22.425 "state": "configuring", 00:16:22.425 "raid_level": "raid5f", 00:16:22.425 "superblock": false, 00:16:22.425 "num_base_bdevs": 3, 00:16:22.425 "num_base_bdevs_discovered": 2, 00:16:22.425 "num_base_bdevs_operational": 3, 00:16:22.425 "base_bdevs_list": [ 00:16:22.425 { 00:16:22.425 "name": "BaseBdev1", 00:16:22.425 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:22.425 "is_configured": true, 00:16:22.425 "data_offset": 0, 00:16:22.425 "data_size": 65536 00:16:22.425 }, 00:16:22.425 { 00:16:22.425 "name": null, 00:16:22.425 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:22.425 "is_configured": false, 00:16:22.425 "data_offset": 0, 00:16:22.425 "data_size": 65536 00:16:22.425 }, 00:16:22.425 { 00:16:22.425 "name": "BaseBdev3", 00:16:22.425 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:22.425 "is_configured": true, 00:16:22.425 "data_offset": 0, 00:16:22.425 "data_size": 65536 00:16:22.425 } 00:16:22.425 ] 00:16:22.425 }' 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.425 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.685 11:32:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.685 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.685 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.685 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.685 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.945 [2024-11-05 11:32:21.982227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.945 11:32:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.945 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.945 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.945 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.945 "name": "Existed_Raid", 00:16:22.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.945 "strip_size_kb": 64, 00:16:22.945 "state": "configuring", 00:16:22.945 "raid_level": "raid5f", 00:16:22.945 "superblock": false, 00:16:22.945 "num_base_bdevs": 3, 00:16:22.945 "num_base_bdevs_discovered": 1, 00:16:22.945 "num_base_bdevs_operational": 3, 00:16:22.945 "base_bdevs_list": [ 00:16:22.945 { 00:16:22.945 "name": "BaseBdev1", 00:16:22.945 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:22.945 "is_configured": true, 00:16:22.945 "data_offset": 0, 00:16:22.945 "data_size": 65536 00:16:22.945 }, 00:16:22.945 { 00:16:22.945 "name": null, 00:16:22.945 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:22.945 "is_configured": false, 00:16:22.945 "data_offset": 0, 00:16:22.945 "data_size": 65536 00:16:22.945 }, 00:16:22.945 { 00:16:22.945 "name": null, 
00:16:22.945 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:22.945 "is_configured": false, 00:16:22.945 "data_offset": 0, 00:16:22.945 "data_size": 65536 00:16:22.945 } 00:16:22.945 ] 00:16:22.945 }' 00:16:22.945 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.945 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.205 [2024-11-05 11:32:22.461412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.205 11:32:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.205 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.465 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.465 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.465 "name": "Existed_Raid", 00:16:23.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.465 "strip_size_kb": 64, 00:16:23.465 "state": "configuring", 00:16:23.465 "raid_level": "raid5f", 00:16:23.465 "superblock": false, 00:16:23.465 "num_base_bdevs": 3, 00:16:23.465 "num_base_bdevs_discovered": 2, 00:16:23.465 "num_base_bdevs_operational": 3, 00:16:23.465 "base_bdevs_list": [ 00:16:23.465 { 
00:16:23.465 "name": "BaseBdev1", 00:16:23.465 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:23.465 "is_configured": true, 00:16:23.465 "data_offset": 0, 00:16:23.465 "data_size": 65536 00:16:23.465 }, 00:16:23.465 { 00:16:23.465 "name": null, 00:16:23.465 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:23.465 "is_configured": false, 00:16:23.465 "data_offset": 0, 00:16:23.465 "data_size": 65536 00:16:23.465 }, 00:16:23.465 { 00:16:23.465 "name": "BaseBdev3", 00:16:23.465 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:23.465 "is_configured": true, 00:16:23.465 "data_offset": 0, 00:16:23.465 "data_size": 65536 00:16:23.465 } 00:16:23.465 ] 00:16:23.465 }' 00:16:23.465 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.465 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.725 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.725 [2024-11-05 11:32:22.952606] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.985 "name": "Existed_Raid", 00:16:23.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.985 "strip_size_kb": 64, 00:16:23.985 "state": "configuring", 00:16:23.985 "raid_level": "raid5f", 00:16:23.985 "superblock": false, 00:16:23.985 "num_base_bdevs": 3, 00:16:23.985 "num_base_bdevs_discovered": 1, 00:16:23.985 "num_base_bdevs_operational": 3, 00:16:23.985 "base_bdevs_list": [ 00:16:23.985 { 00:16:23.985 "name": null, 00:16:23.985 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:23.985 "is_configured": false, 00:16:23.985 "data_offset": 0, 00:16:23.985 "data_size": 65536 00:16:23.985 }, 00:16:23.985 { 00:16:23.985 "name": null, 00:16:23.985 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:23.985 "is_configured": false, 00:16:23.985 "data_offset": 0, 00:16:23.985 "data_size": 65536 00:16:23.985 }, 00:16:23.985 { 00:16:23.985 "name": "BaseBdev3", 00:16:23.985 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:23.985 "is_configured": true, 00:16:23.985 "data_offset": 0, 00:16:23.985 "data_size": 65536 00:16:23.985 } 00:16:23.985 ] 00:16:23.985 }' 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.985 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.256 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.256 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.256 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.256 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.256 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.535 [2024-11-05 11:32:23.536815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.535 11:32:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.535 "name": "Existed_Raid", 00:16:24.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.535 "strip_size_kb": 64, 00:16:24.535 "state": "configuring", 00:16:24.535 "raid_level": "raid5f", 00:16:24.535 "superblock": false, 00:16:24.535 "num_base_bdevs": 3, 00:16:24.535 "num_base_bdevs_discovered": 2, 00:16:24.535 "num_base_bdevs_operational": 3, 00:16:24.535 "base_bdevs_list": [ 00:16:24.535 { 00:16:24.535 "name": null, 00:16:24.535 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:24.535 "is_configured": false, 00:16:24.535 "data_offset": 0, 00:16:24.535 "data_size": 65536 00:16:24.535 }, 00:16:24.535 { 00:16:24.535 "name": "BaseBdev2", 00:16:24.535 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:24.535 "is_configured": true, 00:16:24.535 "data_offset": 0, 00:16:24.535 "data_size": 65536 00:16:24.535 }, 00:16:24.535 { 00:16:24.535 "name": "BaseBdev3", 00:16:24.535 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:24.535 "is_configured": true, 00:16:24.535 "data_offset": 0, 00:16:24.535 "data_size": 65536 00:16:24.535 } 00:16:24.535 ] 00:16:24.535 }' 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.535 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.795 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.795 11:32:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.795 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.795 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.795 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:24.795 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7bb7d77a-9d06-456d-838a-1b06cfcb9aee 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.055 [2024-11-05 11:32:24.113436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:25.055 [2024-11-05 11:32:24.113492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:25.055 [2024-11-05 11:32:24.113501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:25.055 [2024-11-05 11:32:24.113738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:25.055 [2024-11-05 11:32:24.118790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:25.055 [2024-11-05 11:32:24.118812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:25.055 [2024-11-05 11:32:24.119081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.055 NewBaseBdev 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.055 11:32:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.055 [ 00:16:25.055 { 00:16:25.055 "name": "NewBaseBdev", 00:16:25.055 "aliases": [ 00:16:25.055 "7bb7d77a-9d06-456d-838a-1b06cfcb9aee" 00:16:25.055 ], 00:16:25.055 "product_name": "Malloc disk", 00:16:25.055 "block_size": 512, 00:16:25.055 "num_blocks": 65536, 00:16:25.055 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:25.055 "assigned_rate_limits": { 00:16:25.055 "rw_ios_per_sec": 0, 00:16:25.055 "rw_mbytes_per_sec": 0, 00:16:25.055 "r_mbytes_per_sec": 0, 00:16:25.055 "w_mbytes_per_sec": 0 00:16:25.055 }, 00:16:25.055 "claimed": true, 00:16:25.055 "claim_type": "exclusive_write", 00:16:25.055 "zoned": false, 00:16:25.055 "supported_io_types": { 00:16:25.055 "read": true, 00:16:25.055 "write": true, 00:16:25.055 "unmap": true, 00:16:25.055 "flush": true, 00:16:25.055 "reset": true, 00:16:25.055 "nvme_admin": false, 00:16:25.055 "nvme_io": false, 00:16:25.055 "nvme_io_md": false, 00:16:25.055 "write_zeroes": true, 00:16:25.055 "zcopy": true, 00:16:25.055 "get_zone_info": false, 00:16:25.055 "zone_management": false, 00:16:25.055 "zone_append": false, 00:16:25.055 "compare": false, 00:16:25.055 "compare_and_write": false, 00:16:25.055 "abort": true, 00:16:25.055 "seek_hole": false, 00:16:25.055 "seek_data": false, 00:16:25.055 "copy": true, 00:16:25.055 "nvme_iov_md": false 00:16:25.055 }, 00:16:25.055 "memory_domains": [ 00:16:25.055 { 00:16:25.055 "dma_device_id": "system", 00:16:25.055 "dma_device_type": 1 00:16:25.055 }, 00:16:25.055 { 00:16:25.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.055 "dma_device_type": 2 00:16:25.055 } 00:16:25.055 ], 00:16:25.055 "driver_specific": {} 00:16:25.055 } 00:16:25.055 ] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:25.055 11:32:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.055 "name": "Existed_Raid", 00:16:25.055 "uuid": "491ace9d-9470-4f99-beca-1bc5f8811f11", 00:16:25.055 "strip_size_kb": 64, 00:16:25.055 "state": "online", 
00:16:25.055 "raid_level": "raid5f", 00:16:25.055 "superblock": false, 00:16:25.055 "num_base_bdevs": 3, 00:16:25.055 "num_base_bdevs_discovered": 3, 00:16:25.055 "num_base_bdevs_operational": 3, 00:16:25.055 "base_bdevs_list": [ 00:16:25.055 { 00:16:25.055 "name": "NewBaseBdev", 00:16:25.055 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:25.055 "is_configured": true, 00:16:25.055 "data_offset": 0, 00:16:25.055 "data_size": 65536 00:16:25.055 }, 00:16:25.055 { 00:16:25.055 "name": "BaseBdev2", 00:16:25.055 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:25.055 "is_configured": true, 00:16:25.055 "data_offset": 0, 00:16:25.055 "data_size": 65536 00:16:25.055 }, 00:16:25.055 { 00:16:25.055 "name": "BaseBdev3", 00:16:25.055 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:25.055 "is_configured": true, 00:16:25.055 "data_offset": 0, 00:16:25.055 "data_size": 65536 00:16:25.055 } 00:16:25.055 ] 00:16:25.055 }' 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.055 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.320 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.320 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.320 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.320 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.320 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.321 11:32:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.321 [2024-11-05 11:32:24.580623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.593 "name": "Existed_Raid", 00:16:25.593 "aliases": [ 00:16:25.593 "491ace9d-9470-4f99-beca-1bc5f8811f11" 00:16:25.593 ], 00:16:25.593 "product_name": "Raid Volume", 00:16:25.593 "block_size": 512, 00:16:25.593 "num_blocks": 131072, 00:16:25.593 "uuid": "491ace9d-9470-4f99-beca-1bc5f8811f11", 00:16:25.593 "assigned_rate_limits": { 00:16:25.593 "rw_ios_per_sec": 0, 00:16:25.593 "rw_mbytes_per_sec": 0, 00:16:25.593 "r_mbytes_per_sec": 0, 00:16:25.593 "w_mbytes_per_sec": 0 00:16:25.593 }, 00:16:25.593 "claimed": false, 00:16:25.593 "zoned": false, 00:16:25.593 "supported_io_types": { 00:16:25.593 "read": true, 00:16:25.593 "write": true, 00:16:25.593 "unmap": false, 00:16:25.593 "flush": false, 00:16:25.593 "reset": true, 00:16:25.593 "nvme_admin": false, 00:16:25.593 "nvme_io": false, 00:16:25.593 "nvme_io_md": false, 00:16:25.593 "write_zeroes": true, 00:16:25.593 "zcopy": false, 00:16:25.593 "get_zone_info": false, 00:16:25.593 "zone_management": false, 00:16:25.593 "zone_append": false, 00:16:25.593 "compare": false, 00:16:25.593 "compare_and_write": false, 00:16:25.593 "abort": false, 00:16:25.593 "seek_hole": false, 00:16:25.593 "seek_data": false, 00:16:25.593 "copy": false, 00:16:25.593 "nvme_iov_md": false 00:16:25.593 }, 00:16:25.593 "driver_specific": { 00:16:25.593 "raid": { 00:16:25.593 "uuid": 
"491ace9d-9470-4f99-beca-1bc5f8811f11", 00:16:25.593 "strip_size_kb": 64, 00:16:25.593 "state": "online", 00:16:25.593 "raid_level": "raid5f", 00:16:25.593 "superblock": false, 00:16:25.593 "num_base_bdevs": 3, 00:16:25.593 "num_base_bdevs_discovered": 3, 00:16:25.593 "num_base_bdevs_operational": 3, 00:16:25.593 "base_bdevs_list": [ 00:16:25.593 { 00:16:25.593 "name": "NewBaseBdev", 00:16:25.593 "uuid": "7bb7d77a-9d06-456d-838a-1b06cfcb9aee", 00:16:25.593 "is_configured": true, 00:16:25.593 "data_offset": 0, 00:16:25.593 "data_size": 65536 00:16:25.593 }, 00:16:25.593 { 00:16:25.593 "name": "BaseBdev2", 00:16:25.593 "uuid": "5c58ec4f-0ec3-4d8f-a268-92726b7633ab", 00:16:25.593 "is_configured": true, 00:16:25.593 "data_offset": 0, 00:16:25.593 "data_size": 65536 00:16:25.593 }, 00:16:25.593 { 00:16:25.593 "name": "BaseBdev3", 00:16:25.593 "uuid": "a2cad39b-da67-4efd-9a3f-ba5c5cc935b5", 00:16:25.593 "is_configured": true, 00:16:25.593 "data_offset": 0, 00:16:25.593 "data_size": 65536 00:16:25.593 } 00:16:25.593 ] 00:16:25.593 } 00:16:25.593 } 00:16:25.593 }' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.593 BaseBdev2 00:16:25.593 BaseBdev3' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 [2024-11-05 11:32:24.832029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.593 [2024-11-05 11:32:24.832057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.593 [2024-11-05 11:32:24.832121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.593 [2024-11-05 11:32:24.832405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.593 [2024-11-05 11:32:24.832424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79935 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 79935 ']' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 79935 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:25.593 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79935 00:16:25.853 killing process with pid 79935 00:16:25.853 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:25.853 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:25.853 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79935' 00:16:25.853 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 79935 00:16:25.853 [2024-11-05 11:32:24.869252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.853 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 79935 00:16:26.113 [2024-11-05 11:32:25.155592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.052 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.052 00:16:27.052 real 0m10.347s 00:16:27.052 user 0m16.469s 00:16:27.052 sys 0m1.877s 00:16:27.052 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.052 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.052 ************************************ 00:16:27.052 END TEST raid5f_state_function_test 00:16:27.052 ************************************ 00:16:27.053 11:32:26 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:27.053 11:32:26 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:27.053 11:32:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:27.053 11:32:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.053 ************************************ 00:16:27.053 START TEST raid5f_state_function_test_sb 00:16:27.053 ************************************ 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:27.053 11:32:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80555 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80555' 00:16:27.053 Process raid pid: 80555 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80555 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80555 ']' 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.053 11:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.313 [2024-11-05 11:32:26.358368] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:16:27.313 [2024-11-05 11:32:26.358492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.313 [2024-11-05 11:32:26.525881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.572 [2024-11-05 11:32:26.630816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.572 [2024-11-05 11:32:26.820306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.572 [2024-11-05 11:32:26.820343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 [2024-11-05 11:32:27.172987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.143 [2024-11-05 11:32:27.173041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.143 [2024-11-05 11:32:27.173055] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.143 [2024-11-05 11:32:27.173064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.143 [2024-11-05 11:32:27.173086] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:28.143 [2024-11-05 11:32:27.173094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 11:32:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.143 "name": "Existed_Raid", 00:16:28.143 "uuid": "7b431ee5-8c73-4091-aab6-9d812d5966be", 00:16:28.143 "strip_size_kb": 64, 00:16:28.143 "state": "configuring", 00:16:28.143 "raid_level": "raid5f", 00:16:28.143 "superblock": true, 00:16:28.143 "num_base_bdevs": 3, 00:16:28.143 "num_base_bdevs_discovered": 0, 00:16:28.143 "num_base_bdevs_operational": 3, 00:16:28.143 "base_bdevs_list": [ 00:16:28.143 { 00:16:28.143 "name": "BaseBdev1", 00:16:28.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.143 "is_configured": false, 00:16:28.143 "data_offset": 0, 00:16:28.143 "data_size": 0 00:16:28.143 }, 00:16:28.143 { 00:16:28.143 "name": "BaseBdev2", 00:16:28.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.143 "is_configured": false, 00:16:28.143 "data_offset": 0, 00:16:28.143 "data_size": 0 00:16:28.143 }, 00:16:28.143 { 00:16:28.143 "name": "BaseBdev3", 00:16:28.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.143 "is_configured": false, 00:16:28.143 "data_offset": 0, 00:16:28.143 "data_size": 0 00:16:28.143 } 00:16:28.143 ] 00:16:28.143 }' 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.143 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.403 [2024-11-05 11:32:27.628152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.403 
[2024-11-05 11:32:27.628190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.403 [2024-11-05 11:32:27.640142] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.403 [2024-11-05 11:32:27.640181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.403 [2024-11-05 11:32:27.640190] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.403 [2024-11-05 11:32:27.640215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.403 [2024-11-05 11:32:27.640221] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.403 [2024-11-05 11:32:27.640229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.403 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.663 [2024-11-05 11:32:27.684615] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.663 BaseBdev1 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.663 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 [ 00:16:28.664 { 00:16:28.664 "name": "BaseBdev1", 00:16:28.664 "aliases": [ 00:16:28.664 "9ea3ef05-03f0-4f1c-8e43-63a56191f21f" 00:16:28.664 ], 00:16:28.664 "product_name": "Malloc disk", 00:16:28.664 "block_size": 512, 00:16:28.664 
"num_blocks": 65536, 00:16:28.664 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:28.664 "assigned_rate_limits": { 00:16:28.664 "rw_ios_per_sec": 0, 00:16:28.664 "rw_mbytes_per_sec": 0, 00:16:28.664 "r_mbytes_per_sec": 0, 00:16:28.664 "w_mbytes_per_sec": 0 00:16:28.664 }, 00:16:28.664 "claimed": true, 00:16:28.664 "claim_type": "exclusive_write", 00:16:28.664 "zoned": false, 00:16:28.664 "supported_io_types": { 00:16:28.664 "read": true, 00:16:28.664 "write": true, 00:16:28.664 "unmap": true, 00:16:28.664 "flush": true, 00:16:28.664 "reset": true, 00:16:28.664 "nvme_admin": false, 00:16:28.664 "nvme_io": false, 00:16:28.664 "nvme_io_md": false, 00:16:28.664 "write_zeroes": true, 00:16:28.664 "zcopy": true, 00:16:28.664 "get_zone_info": false, 00:16:28.664 "zone_management": false, 00:16:28.664 "zone_append": false, 00:16:28.664 "compare": false, 00:16:28.664 "compare_and_write": false, 00:16:28.664 "abort": true, 00:16:28.664 "seek_hole": false, 00:16:28.664 "seek_data": false, 00:16:28.664 "copy": true, 00:16:28.664 "nvme_iov_md": false 00:16:28.664 }, 00:16:28.664 "memory_domains": [ 00:16:28.664 { 00:16:28.664 "dma_device_id": "system", 00:16:28.664 "dma_device_type": 1 00:16:28.664 }, 00:16:28.664 { 00:16:28.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.664 "dma_device_type": 2 00:16:28.664 } 00:16:28.664 ], 00:16:28.664 "driver_specific": {} 00:16:28.664 } 00:16:28.664 ] 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.664 "name": "Existed_Raid", 00:16:28.664 "uuid": "e752541c-7dda-4fb5-b640-8fd6e2accd05", 00:16:28.664 "strip_size_kb": 64, 00:16:28.664 "state": "configuring", 00:16:28.664 "raid_level": "raid5f", 00:16:28.664 "superblock": true, 00:16:28.664 "num_base_bdevs": 3, 00:16:28.664 "num_base_bdevs_discovered": 1, 00:16:28.664 "num_base_bdevs_operational": 3, 00:16:28.664 "base_bdevs_list": [ 00:16:28.664 { 00:16:28.664 
"name": "BaseBdev1", 00:16:28.664 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:28.664 "is_configured": true, 00:16:28.664 "data_offset": 2048, 00:16:28.664 "data_size": 63488 00:16:28.664 }, 00:16:28.664 { 00:16:28.664 "name": "BaseBdev2", 00:16:28.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.664 "is_configured": false, 00:16:28.664 "data_offset": 0, 00:16:28.664 "data_size": 0 00:16:28.664 }, 00:16:28.664 { 00:16:28.664 "name": "BaseBdev3", 00:16:28.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.664 "is_configured": false, 00:16:28.664 "data_offset": 0, 00:16:28.664 "data_size": 0 00:16:28.664 } 00:16:28.664 ] 00:16:28.664 }' 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.664 11:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.924 [2024-11-05 11:32:28.135876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.924 [2024-11-05 11:32:28.135937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:28.924 [2024-11-05 11:32:28.147914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.924 [2024-11-05 11:32:28.149655] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.924 [2024-11-05 11:32:28.149712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.924 [2024-11-05 11:32:28.149721] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.924 [2024-11-05 11:32:28.149729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.924 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.924 "name": "Existed_Raid", 00:16:28.924 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:28.924 "strip_size_kb": 64, 00:16:28.924 "state": "configuring", 00:16:28.924 "raid_level": "raid5f", 00:16:28.924 "superblock": true, 00:16:28.924 "num_base_bdevs": 3, 00:16:28.924 "num_base_bdevs_discovered": 1, 00:16:28.924 "num_base_bdevs_operational": 3, 00:16:28.924 "base_bdevs_list": [ 00:16:28.924 { 00:16:28.924 "name": "BaseBdev1", 00:16:28.924 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:28.924 "is_configured": true, 00:16:28.924 "data_offset": 2048, 00:16:28.924 "data_size": 63488 00:16:28.925 }, 00:16:28.925 { 00:16:28.925 "name": "BaseBdev2", 00:16:28.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.925 "is_configured": false, 00:16:28.925 "data_offset": 0, 00:16:28.925 "data_size": 0 00:16:28.925 }, 00:16:28.925 { 00:16:28.925 "name": "BaseBdev3", 00:16:28.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.925 "is_configured": false, 00:16:28.925 "data_offset": 0, 00:16:28.925 "data_size": 
0 00:16:28.925 } 00:16:28.925 ] 00:16:28.925 }' 00:16:28.925 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.925 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 [2024-11-05 11:32:28.609356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.495 BaseBdev2 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 [ 00:16:29.495 { 00:16:29.495 "name": "BaseBdev2", 00:16:29.495 "aliases": [ 00:16:29.495 "b317ad2c-c7d1-418a-bac5-61381c719c31" 00:16:29.495 ], 00:16:29.495 "product_name": "Malloc disk", 00:16:29.495 "block_size": 512, 00:16:29.495 "num_blocks": 65536, 00:16:29.495 "uuid": "b317ad2c-c7d1-418a-bac5-61381c719c31", 00:16:29.495 "assigned_rate_limits": { 00:16:29.495 "rw_ios_per_sec": 0, 00:16:29.495 "rw_mbytes_per_sec": 0, 00:16:29.495 "r_mbytes_per_sec": 0, 00:16:29.495 "w_mbytes_per_sec": 0 00:16:29.495 }, 00:16:29.495 "claimed": true, 00:16:29.495 "claim_type": "exclusive_write", 00:16:29.495 "zoned": false, 00:16:29.495 "supported_io_types": { 00:16:29.495 "read": true, 00:16:29.495 "write": true, 00:16:29.495 "unmap": true, 00:16:29.495 "flush": true, 00:16:29.495 "reset": true, 00:16:29.495 "nvme_admin": false, 00:16:29.495 "nvme_io": false, 00:16:29.495 "nvme_io_md": false, 00:16:29.495 "write_zeroes": true, 00:16:29.495 "zcopy": true, 00:16:29.495 "get_zone_info": false, 00:16:29.495 "zone_management": false, 00:16:29.495 "zone_append": false, 00:16:29.495 "compare": false, 00:16:29.495 "compare_and_write": false, 00:16:29.495 "abort": true, 00:16:29.495 "seek_hole": false, 00:16:29.495 "seek_data": false, 00:16:29.495 "copy": true, 00:16:29.495 "nvme_iov_md": false 00:16:29.495 }, 00:16:29.495 "memory_domains": [ 00:16:29.495 { 00:16:29.495 "dma_device_id": "system", 00:16:29.495 "dma_device_type": 1 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.495 "dma_device_type": 2 00:16:29.495 } 
00:16:29.495 ], 00:16:29.495 "driver_specific": {} 00:16:29.495 } 00:16:29.495 ] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.495 11:32:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.495 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.495 "name": "Existed_Raid", 00:16:29.495 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:29.495 "strip_size_kb": 64, 00:16:29.495 "state": "configuring", 00:16:29.495 "raid_level": "raid5f", 00:16:29.495 "superblock": true, 00:16:29.495 "num_base_bdevs": 3, 00:16:29.495 "num_base_bdevs_discovered": 2, 00:16:29.495 "num_base_bdevs_operational": 3, 00:16:29.495 "base_bdevs_list": [ 00:16:29.495 { 00:16:29.495 "name": "BaseBdev1", 00:16:29.495 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:29.495 "is_configured": true, 00:16:29.495 "data_offset": 2048, 00:16:29.495 "data_size": 63488 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "name": "BaseBdev2", 00:16:29.495 "uuid": "b317ad2c-c7d1-418a-bac5-61381c719c31", 00:16:29.495 "is_configured": true, 00:16:29.495 "data_offset": 2048, 00:16:29.495 "data_size": 63488 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "name": "BaseBdev3", 00:16:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.495 "is_configured": false, 00:16:29.495 "data_offset": 0, 00:16:29.495 "data_size": 0 00:16:29.496 } 00:16:29.496 ] 00:16:29.496 }' 00:16:29.496 11:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.496 11:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.065 [2024-11-05 11:32:29.173566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.065 [2024-11-05 11:32:29.173830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:30.065 [2024-11-05 11:32:29.173851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:30.065 [2024-11-05 11:32:29.174126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:30.065 BaseBdev3 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.065 [2024-11-05 11:32:29.179171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:30.065 [2024-11-05 11:32:29.179195] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:30.065 [2024-11-05 11:32:29.179369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.065 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.065 [ 00:16:30.065 { 00:16:30.065 "name": "BaseBdev3", 00:16:30.065 "aliases": [ 00:16:30.065 "fd69bbfc-0d57-4385-828a-4e6fdb08c1f0" 00:16:30.065 ], 00:16:30.066 "product_name": "Malloc disk", 00:16:30.066 "block_size": 512, 00:16:30.066 "num_blocks": 65536, 00:16:30.066 "uuid": "fd69bbfc-0d57-4385-828a-4e6fdb08c1f0", 00:16:30.066 "assigned_rate_limits": { 00:16:30.066 "rw_ios_per_sec": 0, 00:16:30.066 "rw_mbytes_per_sec": 0, 00:16:30.066 "r_mbytes_per_sec": 0, 00:16:30.066 "w_mbytes_per_sec": 0 00:16:30.066 }, 00:16:30.066 "claimed": true, 00:16:30.066 "claim_type": "exclusive_write", 00:16:30.066 "zoned": false, 00:16:30.066 "supported_io_types": { 00:16:30.066 "read": true, 00:16:30.066 "write": true, 00:16:30.066 "unmap": true, 00:16:30.066 "flush": true, 00:16:30.066 "reset": true, 00:16:30.066 "nvme_admin": false, 00:16:30.066 "nvme_io": false, 00:16:30.066 "nvme_io_md": false, 00:16:30.066 "write_zeroes": true, 00:16:30.066 "zcopy": true, 00:16:30.066 "get_zone_info": false, 00:16:30.066 "zone_management": false, 00:16:30.066 "zone_append": false, 00:16:30.066 "compare": false, 00:16:30.066 "compare_and_write": false, 00:16:30.066 "abort": true, 00:16:30.066 "seek_hole": false, 00:16:30.066 "seek_data": false, 00:16:30.066 "copy": true, 00:16:30.066 
"nvme_iov_md": false 00:16:30.066 }, 00:16:30.066 "memory_domains": [ 00:16:30.066 { 00:16:30.066 "dma_device_id": "system", 00:16:30.066 "dma_device_type": 1 00:16:30.066 }, 00:16:30.066 { 00:16:30.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.066 "dma_device_type": 2 00:16:30.066 } 00:16:30.066 ], 00:16:30.066 "driver_specific": {} 00:16:30.066 } 00:16:30.066 ] 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.066 "name": "Existed_Raid", 00:16:30.066 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:30.066 "strip_size_kb": 64, 00:16:30.066 "state": "online", 00:16:30.066 "raid_level": "raid5f", 00:16:30.066 "superblock": true, 00:16:30.066 "num_base_bdevs": 3, 00:16:30.066 "num_base_bdevs_discovered": 3, 00:16:30.066 "num_base_bdevs_operational": 3, 00:16:30.066 "base_bdevs_list": [ 00:16:30.066 { 00:16:30.066 "name": "BaseBdev1", 00:16:30.066 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:30.066 "is_configured": true, 00:16:30.066 "data_offset": 2048, 00:16:30.066 "data_size": 63488 00:16:30.066 }, 00:16:30.066 { 00:16:30.066 "name": "BaseBdev2", 00:16:30.066 "uuid": "b317ad2c-c7d1-418a-bac5-61381c719c31", 00:16:30.066 "is_configured": true, 00:16:30.066 "data_offset": 2048, 00:16:30.066 "data_size": 63488 00:16:30.066 }, 00:16:30.066 { 00:16:30.066 "name": "BaseBdev3", 00:16:30.066 "uuid": "fd69bbfc-0d57-4385-828a-4e6fdb08c1f0", 00:16:30.066 "is_configured": true, 00:16:30.066 "data_offset": 2048, 00:16:30.066 "data_size": 63488 00:16:30.066 } 00:16:30.066 ] 00:16:30.066 }' 00:16:30.066 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.066 11:32:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.632 [2024-11-05 11:32:29.700717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.632 "name": "Existed_Raid", 00:16:30.632 "aliases": [ 00:16:30.632 "af37b33e-8954-435f-9e4e-a8986bfee002" 00:16:30.632 ], 00:16:30.632 "product_name": "Raid Volume", 00:16:30.632 "block_size": 512, 00:16:30.632 "num_blocks": 126976, 00:16:30.632 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:30.632 "assigned_rate_limits": { 00:16:30.632 "rw_ios_per_sec": 0, 00:16:30.632 
"rw_mbytes_per_sec": 0, 00:16:30.632 "r_mbytes_per_sec": 0, 00:16:30.632 "w_mbytes_per_sec": 0 00:16:30.632 }, 00:16:30.632 "claimed": false, 00:16:30.632 "zoned": false, 00:16:30.632 "supported_io_types": { 00:16:30.632 "read": true, 00:16:30.632 "write": true, 00:16:30.632 "unmap": false, 00:16:30.632 "flush": false, 00:16:30.632 "reset": true, 00:16:30.632 "nvme_admin": false, 00:16:30.632 "nvme_io": false, 00:16:30.632 "nvme_io_md": false, 00:16:30.632 "write_zeroes": true, 00:16:30.632 "zcopy": false, 00:16:30.632 "get_zone_info": false, 00:16:30.632 "zone_management": false, 00:16:30.632 "zone_append": false, 00:16:30.632 "compare": false, 00:16:30.632 "compare_and_write": false, 00:16:30.632 "abort": false, 00:16:30.632 "seek_hole": false, 00:16:30.632 "seek_data": false, 00:16:30.632 "copy": false, 00:16:30.632 "nvme_iov_md": false 00:16:30.632 }, 00:16:30.632 "driver_specific": { 00:16:30.632 "raid": { 00:16:30.632 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:30.632 "strip_size_kb": 64, 00:16:30.632 "state": "online", 00:16:30.632 "raid_level": "raid5f", 00:16:30.632 "superblock": true, 00:16:30.632 "num_base_bdevs": 3, 00:16:30.632 "num_base_bdevs_discovered": 3, 00:16:30.632 "num_base_bdevs_operational": 3, 00:16:30.632 "base_bdevs_list": [ 00:16:30.632 { 00:16:30.632 "name": "BaseBdev1", 00:16:30.632 "uuid": "9ea3ef05-03f0-4f1c-8e43-63a56191f21f", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 }, 00:16:30.632 { 00:16:30.632 "name": "BaseBdev2", 00:16:30.632 "uuid": "b317ad2c-c7d1-418a-bac5-61381c719c31", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 }, 00:16:30.632 { 00:16:30.632 "name": "BaseBdev3", 00:16:30.632 "uuid": "fd69bbfc-0d57-4385-828a-4e6fdb08c1f0", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 } 00:16:30.632 ] 00:16:30.632 } 
00:16:30.632 } 00:16:30.632 }' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:30.632 BaseBdev2 00:16:30.632 BaseBdev3' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.632 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.633 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.892 11:32:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.892 [2024-11-05 
11:32:29.956119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.892 11:32:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.892 "name": "Existed_Raid", 00:16:30.892 "uuid": "af37b33e-8954-435f-9e4e-a8986bfee002", 00:16:30.892 "strip_size_kb": 64, 00:16:30.892 "state": "online", 00:16:30.892 "raid_level": "raid5f", 00:16:30.892 "superblock": true, 00:16:30.892 "num_base_bdevs": 3, 00:16:30.892 "num_base_bdevs_discovered": 2, 00:16:30.892 "num_base_bdevs_operational": 2, 00:16:30.892 "base_bdevs_list": [ 00:16:30.892 { 00:16:30.892 "name": null, 00:16:30.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.892 "is_configured": false, 00:16:30.892 "data_offset": 0, 00:16:30.892 "data_size": 63488 00:16:30.892 }, 00:16:30.892 { 00:16:30.892 "name": "BaseBdev2", 00:16:30.892 "uuid": "b317ad2c-c7d1-418a-bac5-61381c719c31", 00:16:30.892 "is_configured": true, 00:16:30.892 "data_offset": 2048, 00:16:30.892 "data_size": 63488 00:16:30.892 }, 00:16:30.892 { 00:16:30.892 "name": "BaseBdev3", 00:16:30.892 "uuid": "fd69bbfc-0d57-4385-828a-4e6fdb08c1f0", 00:16:30.892 "is_configured": true, 00:16:30.892 "data_offset": 2048, 00:16:30.892 "data_size": 63488 00:16:30.892 } 00:16:30.892 ] 00:16:30.892 }' 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.892 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.461 [2024-11-05 11:32:30.513930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.461 [2024-11-05 11:32:30.514080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.461 [2024-11-05 11:32:30.601852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.461 11:32:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.461 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.461 [2024-11-05 11:32:30.657770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:31.461 [2024-11-05 11:32:30.657833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.721 
11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 BaseBdev2 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:31.721 11:32:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 [ 00:16:31.721 { 00:16:31.721 "name": "BaseBdev2", 00:16:31.721 "aliases": [ 00:16:31.721 "1f91e12d-c966-41a2-83fd-475392d73085" 00:16:31.721 ], 00:16:31.721 "product_name": "Malloc disk", 00:16:31.721 "block_size": 512, 00:16:31.721 "num_blocks": 65536, 00:16:31.721 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:31.721 "assigned_rate_limits": { 00:16:31.721 "rw_ios_per_sec": 0, 00:16:31.721 "rw_mbytes_per_sec": 0, 00:16:31.721 "r_mbytes_per_sec": 0, 00:16:31.721 "w_mbytes_per_sec": 0 00:16:31.721 }, 00:16:31.721 "claimed": false, 00:16:31.721 "zoned": false, 00:16:31.721 "supported_io_types": { 00:16:31.721 "read": true, 00:16:31.721 "write": true, 00:16:31.721 "unmap": true, 00:16:31.721 "flush": true, 00:16:31.721 "reset": true, 00:16:31.721 "nvme_admin": false, 00:16:31.721 "nvme_io": false, 00:16:31.721 "nvme_io_md": false, 00:16:31.721 "write_zeroes": true, 00:16:31.721 "zcopy": true, 00:16:31.721 "get_zone_info": false, 
00:16:31.721 "zone_management": false, 00:16:31.721 "zone_append": false, 00:16:31.721 "compare": false, 00:16:31.721 "compare_and_write": false, 00:16:31.721 "abort": true, 00:16:31.721 "seek_hole": false, 00:16:31.721 "seek_data": false, 00:16:31.721 "copy": true, 00:16:31.721 "nvme_iov_md": false 00:16:31.721 }, 00:16:31.721 "memory_domains": [ 00:16:31.721 { 00:16:31.721 "dma_device_id": "system", 00:16:31.721 "dma_device_type": 1 00:16:31.721 }, 00:16:31.721 { 00:16:31.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.721 "dma_device_type": 2 00:16:31.721 } 00:16:31.721 ], 00:16:31.721 "driver_specific": {} 00:16:31.721 } 00:16:31.721 ] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 BaseBdev3 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.721 11:32:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.721 [ 00:16:31.721 { 00:16:31.721 "name": "BaseBdev3", 00:16:31.721 "aliases": [ 00:16:31.721 "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0" 00:16:31.721 ], 00:16:31.721 "product_name": "Malloc disk", 00:16:31.721 "block_size": 512, 00:16:31.721 "num_blocks": 65536, 00:16:31.721 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:31.721 "assigned_rate_limits": { 00:16:31.721 "rw_ios_per_sec": 0, 00:16:31.721 "rw_mbytes_per_sec": 0, 00:16:31.721 "r_mbytes_per_sec": 0, 00:16:31.721 "w_mbytes_per_sec": 0 00:16:31.721 }, 00:16:31.721 "claimed": false, 00:16:31.721 "zoned": false, 00:16:31.721 "supported_io_types": { 00:16:31.721 "read": true, 00:16:31.721 "write": true, 00:16:31.721 "unmap": true, 00:16:31.721 "flush": true, 00:16:31.721 "reset": true, 00:16:31.721 "nvme_admin": false, 00:16:31.721 "nvme_io": false, 00:16:31.721 "nvme_io_md": 
false, 00:16:31.721 "write_zeroes": true, 00:16:31.721 "zcopy": true, 00:16:31.721 "get_zone_info": false, 00:16:31.721 "zone_management": false, 00:16:31.721 "zone_append": false, 00:16:31.721 "compare": false, 00:16:31.721 "compare_and_write": false, 00:16:31.721 "abort": true, 00:16:31.721 "seek_hole": false, 00:16:31.721 "seek_data": false, 00:16:31.721 "copy": true, 00:16:31.721 "nvme_iov_md": false 00:16:31.721 }, 00:16:31.721 "memory_domains": [ 00:16:31.721 { 00:16:31.721 "dma_device_id": "system", 00:16:31.721 "dma_device_type": 1 00:16:31.721 }, 00:16:31.721 { 00:16:31.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.721 "dma_device_type": 2 00:16:31.721 } 00:16:31.721 ], 00:16:31.721 "driver_specific": {} 00:16:31.721 } 00:16:31.721 ] 00:16:31.721 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.722 [2024-11-05 11:32:30.964485] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.722 [2024-11-05 11:32:30.964533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.722 [2024-11-05 11:32:30.964554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:31.722 [2024-11-05 11:32:30.966316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.722 11:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.981 11:32:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.981 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.981 "name": "Existed_Raid", 00:16:31.981 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:31.981 "strip_size_kb": 64, 00:16:31.981 "state": "configuring", 00:16:31.981 "raid_level": "raid5f", 00:16:31.981 "superblock": true, 00:16:31.981 "num_base_bdevs": 3, 00:16:31.981 "num_base_bdevs_discovered": 2, 00:16:31.981 "num_base_bdevs_operational": 3, 00:16:31.981 "base_bdevs_list": [ 00:16:31.981 { 00:16:31.981 "name": "BaseBdev1", 00:16:31.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.981 "is_configured": false, 00:16:31.981 "data_offset": 0, 00:16:31.981 "data_size": 0 00:16:31.981 }, 00:16:31.981 { 00:16:31.981 "name": "BaseBdev2", 00:16:31.981 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:31.981 "is_configured": true, 00:16:31.981 "data_offset": 2048, 00:16:31.981 "data_size": 63488 00:16:31.981 }, 00:16:31.981 { 00:16:31.981 "name": "BaseBdev3", 00:16:31.981 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:31.981 "is_configured": true, 00:16:31.981 "data_offset": 2048, 00:16:31.981 "data_size": 63488 00:16:31.981 } 00:16:31.981 ] 00:16:31.981 }' 00:16:31.981 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.981 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 [2024-11-05 11:32:31.395702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.241 
11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:32.241 "name": "Existed_Raid", 00:16:32.241 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:32.241 "strip_size_kb": 64, 00:16:32.241 "state": "configuring", 00:16:32.241 "raid_level": "raid5f", 00:16:32.241 "superblock": true, 00:16:32.241 "num_base_bdevs": 3, 00:16:32.241 "num_base_bdevs_discovered": 1, 00:16:32.241 "num_base_bdevs_operational": 3, 00:16:32.241 "base_bdevs_list": [ 00:16:32.241 { 00:16:32.241 "name": "BaseBdev1", 00:16:32.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.241 "is_configured": false, 00:16:32.241 "data_offset": 0, 00:16:32.241 "data_size": 0 00:16:32.241 }, 00:16:32.241 { 00:16:32.241 "name": null, 00:16:32.241 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:32.241 "is_configured": false, 00:16:32.241 "data_offset": 0, 00:16:32.241 "data_size": 63488 00:16:32.241 }, 00:16:32.241 { 00:16:32.241 "name": "BaseBdev3", 00:16:32.241 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:32.241 "is_configured": true, 00:16:32.241 "data_offset": 2048, 00:16:32.241 "data_size": 63488 00:16:32.241 } 00:16:32.241 ] 00:16:32.241 }' 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.241 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 [2024-11-05 11:32:31.933089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.810 BaseBdev1 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.810 
11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.810 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 [ 00:16:32.810 { 00:16:32.810 "name": "BaseBdev1", 00:16:32.810 "aliases": [ 00:16:32.810 "3619cb5d-049b-45f0-a88a-323e6eb79ef8" 00:16:32.810 ], 00:16:32.810 "product_name": "Malloc disk", 00:16:32.810 "block_size": 512, 00:16:32.810 "num_blocks": 65536, 00:16:32.810 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:32.810 "assigned_rate_limits": { 00:16:32.810 "rw_ios_per_sec": 0, 00:16:32.810 "rw_mbytes_per_sec": 0, 00:16:32.810 "r_mbytes_per_sec": 0, 00:16:32.810 "w_mbytes_per_sec": 0 00:16:32.810 }, 00:16:32.810 "claimed": true, 00:16:32.810 "claim_type": "exclusive_write", 00:16:32.810 "zoned": false, 00:16:32.810 "supported_io_types": { 00:16:32.810 "read": true, 00:16:32.810 "write": true, 00:16:32.810 "unmap": true, 00:16:32.810 "flush": true, 00:16:32.810 "reset": true, 00:16:32.810 "nvme_admin": false, 00:16:32.810 "nvme_io": false, 00:16:32.810 "nvme_io_md": false, 00:16:32.810 "write_zeroes": true, 00:16:32.810 "zcopy": true, 00:16:32.810 "get_zone_info": false, 00:16:32.810 "zone_management": false, 00:16:32.810 "zone_append": false, 00:16:32.810 "compare": false, 00:16:32.810 "compare_and_write": false, 00:16:32.810 "abort": true, 00:16:32.810 "seek_hole": false, 00:16:32.810 "seek_data": false, 00:16:32.810 "copy": true, 00:16:32.810 "nvme_iov_md": false 00:16:32.810 }, 00:16:32.810 "memory_domains": [ 00:16:32.810 { 00:16:32.810 "dma_device_id": "system", 00:16:32.811 "dma_device_type": 1 00:16:32.811 }, 00:16:32.811 { 00:16:32.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.811 "dma_device_type": 2 00:16:32.811 } 00:16:32.811 ], 00:16:32.811 "driver_specific": {} 00:16:32.811 } 00:16:32.811 ] 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.811 
11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.811 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.811 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:32.811 "name": "Existed_Raid", 00:16:32.811 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:32.811 "strip_size_kb": 64, 00:16:32.811 "state": "configuring", 00:16:32.811 "raid_level": "raid5f", 00:16:32.811 "superblock": true, 00:16:32.811 "num_base_bdevs": 3, 00:16:32.811 "num_base_bdevs_discovered": 2, 00:16:32.811 "num_base_bdevs_operational": 3, 00:16:32.811 "base_bdevs_list": [ 00:16:32.811 { 00:16:32.811 "name": "BaseBdev1", 00:16:32.811 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:32.811 "is_configured": true, 00:16:32.811 "data_offset": 2048, 00:16:32.811 "data_size": 63488 00:16:32.811 }, 00:16:32.811 { 00:16:32.811 "name": null, 00:16:32.811 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:32.811 "is_configured": false, 00:16:32.811 "data_offset": 0, 00:16:32.811 "data_size": 63488 00:16:32.811 }, 00:16:32.811 { 00:16:32.811 "name": "BaseBdev3", 00:16:32.811 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:32.811 "is_configured": true, 00:16:32.811 "data_offset": 2048, 00:16:32.811 "data_size": 63488 00:16:32.811 } 00:16:32.811 ] 00:16:32.811 }' 00:16:32.811 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.811 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 [2024-11-05 11:32:32.404291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.379 11:32:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.379 "name": "Existed_Raid", 00:16:33.379 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:33.379 "strip_size_kb": 64, 00:16:33.379 "state": "configuring", 00:16:33.379 "raid_level": "raid5f", 00:16:33.379 "superblock": true, 00:16:33.379 "num_base_bdevs": 3, 00:16:33.379 "num_base_bdevs_discovered": 1, 00:16:33.379 "num_base_bdevs_operational": 3, 00:16:33.379 "base_bdevs_list": [ 00:16:33.379 { 00:16:33.379 "name": "BaseBdev1", 00:16:33.379 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:33.379 "is_configured": true, 00:16:33.379 "data_offset": 2048, 00:16:33.379 "data_size": 63488 00:16:33.379 }, 00:16:33.379 { 00:16:33.379 "name": null, 00:16:33.379 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:33.379 "is_configured": false, 00:16:33.379 "data_offset": 0, 00:16:33.379 "data_size": 63488 00:16:33.379 }, 00:16:33.379 { 00:16:33.379 "name": null, 00:16:33.379 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:33.379 "is_configured": false, 00:16:33.379 "data_offset": 0, 00:16:33.379 "data_size": 63488 00:16:33.379 } 00:16:33.379 ] 00:16:33.379 }' 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.379 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.639 [2024-11-05 11:32:32.871504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.639 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.640 11:32:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.640 "name": "Existed_Raid", 00:16:33.640 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:33.640 "strip_size_kb": 64, 00:16:33.640 "state": "configuring", 00:16:33.640 "raid_level": "raid5f", 00:16:33.640 "superblock": true, 00:16:33.640 "num_base_bdevs": 3, 00:16:33.640 "num_base_bdevs_discovered": 2, 00:16:33.640 "num_base_bdevs_operational": 3, 00:16:33.640 "base_bdevs_list": [ 00:16:33.640 { 00:16:33.640 "name": "BaseBdev1", 00:16:33.640 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:33.640 "is_configured": true, 00:16:33.640 "data_offset": 2048, 00:16:33.640 "data_size": 63488 00:16:33.640 }, 00:16:33.640 { 00:16:33.640 "name": null, 00:16:33.640 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:33.640 "is_configured": false, 00:16:33.640 "data_offset": 0, 00:16:33.640 "data_size": 63488 00:16:33.640 }, 00:16:33.640 { 
00:16:33.640 "name": "BaseBdev3", 00:16:33.640 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:33.640 "is_configured": true, 00:16:33.640 "data_offset": 2048, 00:16:33.640 "data_size": 63488 00:16:33.640 } 00:16:33.640 ] 00:16:33.640 }' 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.640 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.209 [2024-11-05 11:32:33.335055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.209 "name": "Existed_Raid", 00:16:34.209 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:34.209 "strip_size_kb": 64, 00:16:34.209 "state": "configuring", 00:16:34.209 "raid_level": "raid5f", 00:16:34.209 "superblock": true, 00:16:34.209 "num_base_bdevs": 3, 00:16:34.209 "num_base_bdevs_discovered": 1, 00:16:34.209 
"num_base_bdevs_operational": 3, 00:16:34.209 "base_bdevs_list": [ 00:16:34.209 { 00:16:34.209 "name": null, 00:16:34.209 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:34.209 "is_configured": false, 00:16:34.209 "data_offset": 0, 00:16:34.209 "data_size": 63488 00:16:34.209 }, 00:16:34.209 { 00:16:34.209 "name": null, 00:16:34.209 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:34.209 "is_configured": false, 00:16:34.209 "data_offset": 0, 00:16:34.209 "data_size": 63488 00:16:34.209 }, 00:16:34.209 { 00:16:34.209 "name": "BaseBdev3", 00:16:34.209 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:34.209 "is_configured": true, 00:16:34.209 "data_offset": 2048, 00:16:34.209 "data_size": 63488 00:16:34.209 } 00:16:34.209 ] 00:16:34.209 }' 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.209 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.778 11:32:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.778 [2024-11-05 11:32:33.889637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.778 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.778 "name": "Existed_Raid", 00:16:34.778 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:34.778 "strip_size_kb": 64, 00:16:34.778 "state": "configuring", 00:16:34.778 "raid_level": "raid5f", 00:16:34.778 "superblock": true, 00:16:34.778 "num_base_bdevs": 3, 00:16:34.778 "num_base_bdevs_discovered": 2, 00:16:34.778 "num_base_bdevs_operational": 3, 00:16:34.778 "base_bdevs_list": [ 00:16:34.778 { 00:16:34.778 "name": null, 00:16:34.778 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:34.779 "is_configured": false, 00:16:34.779 "data_offset": 0, 00:16:34.779 "data_size": 63488 00:16:34.779 }, 00:16:34.779 { 00:16:34.779 "name": "BaseBdev2", 00:16:34.779 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:34.779 "is_configured": true, 00:16:34.779 "data_offset": 2048, 00:16:34.779 "data_size": 63488 00:16:34.779 }, 00:16:34.779 { 00:16:34.779 "name": "BaseBdev3", 00:16:34.779 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:34.779 "is_configured": true, 00:16:34.779 "data_offset": 2048, 00:16:34.779 "data_size": 63488 00:16:34.779 } 00:16:34.779 ] 00:16:34.779 }' 00:16:34.779 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.779 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3619cb5d-049b-45f0-a88a-323e6eb79ef8 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 [2024-11-05 11:32:34.480298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:35.348 [2024-11-05 11:32:34.480520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:35.348 [2024-11-05 11:32:34.480535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:35.348 [2024-11-05 11:32:34.480789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:35.348 NewBaseBdev 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 [2024-11-05 11:32:34.485862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:35.348 [2024-11-05 11:32:34.485888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:35.348 [2024-11-05 11:32:34.486027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 [ 00:16:35.348 { 00:16:35.348 "name": "NewBaseBdev", 00:16:35.348 "aliases": [ 00:16:35.348 
"3619cb5d-049b-45f0-a88a-323e6eb79ef8" 00:16:35.348 ], 00:16:35.348 "product_name": "Malloc disk", 00:16:35.348 "block_size": 512, 00:16:35.348 "num_blocks": 65536, 00:16:35.348 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:35.348 "assigned_rate_limits": { 00:16:35.348 "rw_ios_per_sec": 0, 00:16:35.348 "rw_mbytes_per_sec": 0, 00:16:35.348 "r_mbytes_per_sec": 0, 00:16:35.348 "w_mbytes_per_sec": 0 00:16:35.348 }, 00:16:35.348 "claimed": true, 00:16:35.348 "claim_type": "exclusive_write", 00:16:35.348 "zoned": false, 00:16:35.348 "supported_io_types": { 00:16:35.348 "read": true, 00:16:35.348 "write": true, 00:16:35.348 "unmap": true, 00:16:35.348 "flush": true, 00:16:35.348 "reset": true, 00:16:35.348 "nvme_admin": false, 00:16:35.348 "nvme_io": false, 00:16:35.348 "nvme_io_md": false, 00:16:35.348 "write_zeroes": true, 00:16:35.348 "zcopy": true, 00:16:35.348 "get_zone_info": false, 00:16:35.348 "zone_management": false, 00:16:35.348 "zone_append": false, 00:16:35.348 "compare": false, 00:16:35.348 "compare_and_write": false, 00:16:35.348 "abort": true, 00:16:35.348 "seek_hole": false, 00:16:35.348 "seek_data": false, 00:16:35.348 "copy": true, 00:16:35.348 "nvme_iov_md": false 00:16:35.348 }, 00:16:35.348 "memory_domains": [ 00:16:35.348 { 00:16:35.348 "dma_device_id": "system", 00:16:35.348 "dma_device_type": 1 00:16:35.348 }, 00:16:35.348 { 00:16:35.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.348 "dma_device_type": 2 00:16:35.348 } 00:16:35.348 ], 00:16:35.348 "driver_specific": {} 00:16:35.348 } 00:16:35.348 ] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.348 "name": "Existed_Raid", 00:16:35.348 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:35.348 "strip_size_kb": 64, 00:16:35.348 "state": "online", 00:16:35.348 "raid_level": "raid5f", 00:16:35.348 "superblock": true, 00:16:35.348 "num_base_bdevs": 3, 00:16:35.348 
"num_base_bdevs_discovered": 3, 00:16:35.348 "num_base_bdevs_operational": 3, 00:16:35.348 "base_bdevs_list": [ 00:16:35.348 { 00:16:35.348 "name": "NewBaseBdev", 00:16:35.348 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:35.348 "is_configured": true, 00:16:35.348 "data_offset": 2048, 00:16:35.348 "data_size": 63488 00:16:35.348 }, 00:16:35.348 { 00:16:35.348 "name": "BaseBdev2", 00:16:35.348 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:35.348 "is_configured": true, 00:16:35.348 "data_offset": 2048, 00:16:35.348 "data_size": 63488 00:16:35.348 }, 00:16:35.348 { 00:16:35.348 "name": "BaseBdev3", 00:16:35.348 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:35.348 "is_configured": true, 00:16:35.348 "data_offset": 2048, 00:16:35.348 "data_size": 63488 00:16:35.348 } 00:16:35.348 ] 00:16:35.348 }' 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.348 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.916 [2024-11-05 11:32:34.947389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.916 "name": "Existed_Raid", 00:16:35.916 "aliases": [ 00:16:35.916 "6b981c0f-7c2a-4722-bd11-abfc285de112" 00:16:35.916 ], 00:16:35.916 "product_name": "Raid Volume", 00:16:35.916 "block_size": 512, 00:16:35.916 "num_blocks": 126976, 00:16:35.916 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:35.916 "assigned_rate_limits": { 00:16:35.916 "rw_ios_per_sec": 0, 00:16:35.916 "rw_mbytes_per_sec": 0, 00:16:35.916 "r_mbytes_per_sec": 0, 00:16:35.916 "w_mbytes_per_sec": 0 00:16:35.916 }, 00:16:35.916 "claimed": false, 00:16:35.916 "zoned": false, 00:16:35.916 "supported_io_types": { 00:16:35.916 "read": true, 00:16:35.916 "write": true, 00:16:35.916 "unmap": false, 00:16:35.916 "flush": false, 00:16:35.916 "reset": true, 00:16:35.916 "nvme_admin": false, 00:16:35.916 "nvme_io": false, 00:16:35.916 "nvme_io_md": false, 00:16:35.916 "write_zeroes": true, 00:16:35.916 "zcopy": false, 00:16:35.916 "get_zone_info": false, 00:16:35.916 "zone_management": false, 00:16:35.916 "zone_append": false, 00:16:35.916 "compare": false, 00:16:35.916 "compare_and_write": false, 00:16:35.916 "abort": false, 00:16:35.916 "seek_hole": false, 00:16:35.916 "seek_data": false, 00:16:35.916 "copy": false, 00:16:35.916 "nvme_iov_md": false 00:16:35.916 }, 00:16:35.916 "driver_specific": { 00:16:35.916 "raid": { 00:16:35.916 "uuid": "6b981c0f-7c2a-4722-bd11-abfc285de112", 00:16:35.916 "strip_size_kb": 64, 00:16:35.916 "state": 
"online", 00:16:35.916 "raid_level": "raid5f", 00:16:35.916 "superblock": true, 00:16:35.916 "num_base_bdevs": 3, 00:16:35.916 "num_base_bdevs_discovered": 3, 00:16:35.916 "num_base_bdevs_operational": 3, 00:16:35.916 "base_bdevs_list": [ 00:16:35.916 { 00:16:35.916 "name": "NewBaseBdev", 00:16:35.916 "uuid": "3619cb5d-049b-45f0-a88a-323e6eb79ef8", 00:16:35.916 "is_configured": true, 00:16:35.916 "data_offset": 2048, 00:16:35.916 "data_size": 63488 00:16:35.916 }, 00:16:35.916 { 00:16:35.916 "name": "BaseBdev2", 00:16:35.916 "uuid": "1f91e12d-c966-41a2-83fd-475392d73085", 00:16:35.916 "is_configured": true, 00:16:35.916 "data_offset": 2048, 00:16:35.916 "data_size": 63488 00:16:35.916 }, 00:16:35.916 { 00:16:35.916 "name": "BaseBdev3", 00:16:35.916 "uuid": "93f25b93-ddd6-4c15-b64b-d9dc8e9f5fa0", 00:16:35.916 "is_configured": true, 00:16:35.916 "data_offset": 2048, 00:16:35.916 "data_size": 63488 00:16:35.916 } 00:16:35.916 ] 00:16:35.916 } 00:16:35.916 } 00:16:35.916 }' 00:16:35.916 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:35.916 BaseBdev2 00:16:35.916 BaseBdev3' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.916 11:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.916 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 [2024-11-05 11:32:35.226743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.176 [2024-11-05 11:32:35.226773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.176 [2024-11-05 11:32:35.226840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.176 [2024-11-05 11:32:35.227155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.176 [2024-11-05 11:32:35.227174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80555 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80555 ']' 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 80555 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80555 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:36.176 killing process with pid 80555 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80555' 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80555 00:16:36.176 [2024-11-05 11:32:35.276500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.176 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80555 00:16:36.436 [2024-11-05 11:32:35.556112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.374 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:37.374 00:16:37.374 real 0m10.333s 00:16:37.374 user 0m16.452s 00:16:37.374 sys 0m1.887s 00:16:37.374 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.374 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.374 ************************************ 00:16:37.374 END TEST raid5f_state_function_test_sb 00:16:37.374 ************************************ 00:16:37.635 11:32:36 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:37.635 11:32:36 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:37.635 11:32:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:37.635 11:32:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.635 ************************************ 00:16:37.635 START TEST raid5f_superblock_test 00:16:37.635 ************************************ 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81171 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81171 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81171 ']' 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:37.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:37.635 11:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.635 [2024-11-05 11:32:36.764584] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:16:37.635 [2024-11-05 11:32:36.764708] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81171 ] 00:16:37.895 [2024-11-05 11:32:36.936583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.895 [2024-11-05 11:32:37.038898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.155 [2024-11-05 11:32:37.227564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.155 [2024-11-05 11:32:37.227604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.414 malloc1 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.414 [2024-11-05 11:32:37.622899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.414 [2024-11-05 11:32:37.623024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.414 [2024-11-05 11:32:37.623052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.414 [2024-11-05 11:32:37.623062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.414 [2024-11-05 11:32:37.625195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.414 [2024-11-05 11:32:37.625229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.414 pt1 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.414 malloc2 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.414 [2024-11-05 11:32:37.671349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.414 [2024-11-05 11:32:37.671454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.414 [2024-11-05 11:32:37.671493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.414 [2024-11-05 11:32:37.671520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.414 [2024-11-05 11:32:37.673484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.414 [2024-11-05 11:32:37.673549] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.414 pt2 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.414 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.674 malloc3 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.674 [2024-11-05 11:32:37.759662] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.674 [2024-11-05 11:32:37.759764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.674 [2024-11-05 11:32:37.759801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:38.674 [2024-11-05 11:32:37.759828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.674 [2024-11-05 11:32:37.761808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.674 [2024-11-05 11:32:37.761875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.674 pt3 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.674 [2024-11-05 11:32:37.771696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.674 [2024-11-05 11:32:37.773469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.674 [2024-11-05 11:32:37.773583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.674 [2024-11-05 11:32:37.773780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:38.674 [2024-11-05 11:32:37.773832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:38.674 [2024-11-05 11:32:37.774068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.674 [2024-11-05 11:32:37.779637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:38.674 [2024-11-05 11:32:37.779690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:38.674 [2024-11-05 11:32:37.779897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.674 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.675 "name": "raid_bdev1", 00:16:38.675 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:38.675 "strip_size_kb": 64, 00:16:38.675 "state": "online", 00:16:38.675 "raid_level": "raid5f", 00:16:38.675 "superblock": true, 00:16:38.675 "num_base_bdevs": 3, 00:16:38.675 "num_base_bdevs_discovered": 3, 00:16:38.675 "num_base_bdevs_operational": 3, 00:16:38.675 "base_bdevs_list": [ 00:16:38.675 { 00:16:38.675 "name": "pt1", 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.675 "is_configured": true, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 }, 00:16:38.675 { 00:16:38.675 "name": "pt2", 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.675 "is_configured": true, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 }, 00:16:38.675 { 00:16:38.675 "name": "pt3", 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.675 "is_configured": true, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 } 00:16:38.675 ] 00:16:38.675 }' 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.675 11:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.244 11:32:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.244 [2024-11-05 11:32:38.241500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.244 "name": "raid_bdev1", 00:16:39.244 "aliases": [ 00:16:39.244 "efdd0159-0815-43de-b170-a8c8aa598fe6" 00:16:39.244 ], 00:16:39.244 "product_name": "Raid Volume", 00:16:39.244 "block_size": 512, 00:16:39.244 "num_blocks": 126976, 00:16:39.244 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:39.244 "assigned_rate_limits": { 00:16:39.244 "rw_ios_per_sec": 0, 00:16:39.244 "rw_mbytes_per_sec": 0, 00:16:39.244 "r_mbytes_per_sec": 0, 00:16:39.244 "w_mbytes_per_sec": 0 00:16:39.244 }, 00:16:39.244 "claimed": false, 00:16:39.244 "zoned": false, 00:16:39.244 "supported_io_types": { 00:16:39.244 "read": true, 00:16:39.244 "write": true, 00:16:39.244 "unmap": false, 00:16:39.244 "flush": false, 00:16:39.244 "reset": true, 00:16:39.244 "nvme_admin": false, 00:16:39.244 "nvme_io": false, 00:16:39.244 "nvme_io_md": false, 
00:16:39.244 "write_zeroes": true, 00:16:39.244 "zcopy": false, 00:16:39.244 "get_zone_info": false, 00:16:39.244 "zone_management": false, 00:16:39.244 "zone_append": false, 00:16:39.244 "compare": false, 00:16:39.244 "compare_and_write": false, 00:16:39.244 "abort": false, 00:16:39.244 "seek_hole": false, 00:16:39.244 "seek_data": false, 00:16:39.244 "copy": false, 00:16:39.244 "nvme_iov_md": false 00:16:39.244 }, 00:16:39.244 "driver_specific": { 00:16:39.244 "raid": { 00:16:39.244 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:39.244 "strip_size_kb": 64, 00:16:39.244 "state": "online", 00:16:39.244 "raid_level": "raid5f", 00:16:39.244 "superblock": true, 00:16:39.244 "num_base_bdevs": 3, 00:16:39.244 "num_base_bdevs_discovered": 3, 00:16:39.244 "num_base_bdevs_operational": 3, 00:16:39.244 "base_bdevs_list": [ 00:16:39.244 { 00:16:39.244 "name": "pt1", 00:16:39.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.244 "is_configured": true, 00:16:39.244 "data_offset": 2048, 00:16:39.244 "data_size": 63488 00:16:39.244 }, 00:16:39.244 { 00:16:39.244 "name": "pt2", 00:16:39.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.244 "is_configured": true, 00:16:39.244 "data_offset": 2048, 00:16:39.244 "data_size": 63488 00:16:39.244 }, 00:16:39.244 { 00:16:39.244 "name": "pt3", 00:16:39.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.244 "is_configured": true, 00:16:39.244 "data_offset": 2048, 00:16:39.244 "data_size": 63488 00:16:39.244 } 00:16:39.244 ] 00:16:39.244 } 00:16:39.244 } 00:16:39.244 }' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.244 pt2 00:16:39.244 pt3' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.244 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.245 
11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.245 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:39.504 [2024-11-05 11:32:38.544966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=efdd0159-0815-43de-b170-a8c8aa598fe6 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z efdd0159-0815-43de-b170-a8c8aa598fe6 ']' 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.504 11:32:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.504 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.504 [2024-11-05 11:32:38.592720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.505 [2024-11-05 11:32:38.592744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.505 [2024-11-05 11:32:38.592811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.505 [2024-11-05 11:32:38.592880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.505 [2024-11-05 11:32:38.592889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 [2024-11-05 11:32:38.744510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:39.505 [2024-11-05 11:32:38.746199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:39.505 [2024-11-05 11:32:38.746252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:39.505 [2024-11-05 11:32:38.746300] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:39.505 [2024-11-05 11:32:38.746369] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:39.505 [2024-11-05 11:32:38.746387] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:39.505 [2024-11-05 11:32:38.746403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.505 [2024-11-05 11:32:38.746412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:39.505 request: 00:16:39.505 { 00:16:39.505 "name": "raid_bdev1", 00:16:39.505 "raid_level": "raid5f", 00:16:39.505 "base_bdevs": [ 00:16:39.505 "malloc1", 00:16:39.505 "malloc2", 00:16:39.505 "malloc3" 00:16:39.505 ], 00:16:39.505 "strip_size_kb": 64, 00:16:39.505 "superblock": false, 00:16:39.505 "method": "bdev_raid_create", 00:16:39.505 "req_id": 1 00:16:39.505 } 00:16:39.505 Got JSON-RPC error response 00:16:39.505 response: 00:16:39.505 { 00:16:39.505 "code": -17, 00:16:39.505 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:39.505 } 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.505 
11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.765 [2024-11-05 11:32:38.796364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.765 [2024-11-05 11:32:38.796476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.765 [2024-11-05 11:32:38.796511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:39.765 [2024-11-05 11:32:38.796538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.765 [2024-11-05 11:32:38.798572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.765 [2024-11-05 11:32:38.798640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.765 [2024-11-05 11:32:38.798746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:39.765 [2024-11-05 11:32:38.798804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.765 pt1 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.765 "name": "raid_bdev1", 00:16:39.765 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:39.765 "strip_size_kb": 64, 00:16:39.765 "state": "configuring", 00:16:39.765 "raid_level": "raid5f", 00:16:39.765 "superblock": true, 00:16:39.765 "num_base_bdevs": 3, 00:16:39.765 "num_base_bdevs_discovered": 1, 00:16:39.765 
"num_base_bdevs_operational": 3, 00:16:39.765 "base_bdevs_list": [ 00:16:39.765 { 00:16:39.765 "name": "pt1", 00:16:39.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.765 "is_configured": true, 00:16:39.765 "data_offset": 2048, 00:16:39.765 "data_size": 63488 00:16:39.765 }, 00:16:39.765 { 00:16:39.765 "name": null, 00:16:39.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.765 "is_configured": false, 00:16:39.765 "data_offset": 2048, 00:16:39.765 "data_size": 63488 00:16:39.765 }, 00:16:39.765 { 00:16:39.765 "name": null, 00:16:39.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.765 "is_configured": false, 00:16:39.765 "data_offset": 2048, 00:16:39.765 "data_size": 63488 00:16:39.765 } 00:16:39.765 ] 00:16:39.765 }' 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.765 11:32:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.025 [2024-11-05 11:32:39.235623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.025 [2024-11-05 11:32:39.235670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.025 [2024-11-05 11:32:39.235705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:40.025 [2024-11-05 11:32:39.235714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.025 [2024-11-05 11:32:39.236084] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.025 [2024-11-05 11:32:39.236118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.025 [2024-11-05 11:32:39.236196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.025 [2024-11-05 11:32:39.236215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.025 pt2 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.025 [2024-11-05 11:32:39.247620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.025 "name": "raid_bdev1", 00:16:40.025 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:40.025 "strip_size_kb": 64, 00:16:40.025 "state": "configuring", 00:16:40.025 "raid_level": "raid5f", 00:16:40.025 "superblock": true, 00:16:40.025 "num_base_bdevs": 3, 00:16:40.025 "num_base_bdevs_discovered": 1, 00:16:40.025 "num_base_bdevs_operational": 3, 00:16:40.025 "base_bdevs_list": [ 00:16:40.025 { 00:16:40.025 "name": "pt1", 00:16:40.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.025 "is_configured": true, 00:16:40.025 "data_offset": 2048, 00:16:40.025 "data_size": 63488 00:16:40.025 }, 00:16:40.025 { 00:16:40.025 "name": null, 00:16:40.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.025 "is_configured": false, 00:16:40.025 "data_offset": 0, 00:16:40.025 "data_size": 63488 00:16:40.025 }, 00:16:40.025 { 00:16:40.025 "name": null, 00:16:40.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.025 "is_configured": false, 00:16:40.025 "data_offset": 2048, 00:16:40.025 "data_size": 63488 00:16:40.025 } 00:16:40.025 ] 00:16:40.025 }' 00:16:40.025 11:32:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.025 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.595 [2024-11-05 11:32:39.730802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.595 [2024-11-05 11:32:39.730906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.595 [2024-11-05 11:32:39.730941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:40.595 [2024-11-05 11:32:39.730970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.595 [2024-11-05 11:32:39.731439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.595 [2024-11-05 11:32:39.731502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.595 [2024-11-05 11:32:39.731604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.595 [2024-11-05 11:32:39.731656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.595 pt2 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.595 11:32:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.595 [2024-11-05 11:32:39.742788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.595 [2024-11-05 11:32:39.742870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.595 [2024-11-05 11:32:39.742899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.595 [2024-11-05 11:32:39.742924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.595 [2024-11-05 11:32:39.743356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.595 [2024-11-05 11:32:39.743427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.595 [2024-11-05 11:32:39.743510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:40.595 [2024-11-05 11:32:39.743558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.595 [2024-11-05 11:32:39.743696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:40.595 [2024-11-05 11:32:39.743736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:40.595 [2024-11-05 11:32:39.743984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:40.595 [2024-11-05 11:32:39.749402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:40.595 [2024-11-05 11:32:39.749458] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:40.595 [2024-11-05 11:32:39.749674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.595 pt3 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.595 "name": "raid_bdev1", 00:16:40.595 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:40.595 "strip_size_kb": 64, 00:16:40.595 "state": "online", 00:16:40.595 "raid_level": "raid5f", 00:16:40.595 "superblock": true, 00:16:40.595 "num_base_bdevs": 3, 00:16:40.595 "num_base_bdevs_discovered": 3, 00:16:40.595 "num_base_bdevs_operational": 3, 00:16:40.595 "base_bdevs_list": [ 00:16:40.595 { 00:16:40.595 "name": "pt1", 00:16:40.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.595 "is_configured": true, 00:16:40.595 "data_offset": 2048, 00:16:40.595 "data_size": 63488 00:16:40.595 }, 00:16:40.595 { 00:16:40.595 "name": "pt2", 00:16:40.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.595 "is_configured": true, 00:16:40.595 "data_offset": 2048, 00:16:40.595 "data_size": 63488 00:16:40.595 }, 00:16:40.595 { 00:16:40.595 "name": "pt3", 00:16:40.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.595 "is_configured": true, 00:16:40.595 "data_offset": 2048, 00:16:40.595 "data_size": 63488 00:16:40.595 } 00:16:40.595 ] 00:16:40.595 }' 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.595 11:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.164 
11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.164 [2024-11-05 11:32:40.195535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.164 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.164 "name": "raid_bdev1", 00:16:41.164 "aliases": [ 00:16:41.164 "efdd0159-0815-43de-b170-a8c8aa598fe6" 00:16:41.164 ], 00:16:41.164 "product_name": "Raid Volume", 00:16:41.164 "block_size": 512, 00:16:41.164 "num_blocks": 126976, 00:16:41.164 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:41.164 "assigned_rate_limits": { 00:16:41.164 "rw_ios_per_sec": 0, 00:16:41.164 "rw_mbytes_per_sec": 0, 00:16:41.164 "r_mbytes_per_sec": 0, 00:16:41.164 "w_mbytes_per_sec": 0 00:16:41.164 }, 00:16:41.164 "claimed": false, 00:16:41.164 "zoned": false, 00:16:41.165 "supported_io_types": { 00:16:41.165 "read": true, 00:16:41.165 "write": true, 00:16:41.165 "unmap": false, 00:16:41.165 "flush": false, 00:16:41.165 "reset": true, 00:16:41.165 "nvme_admin": false, 00:16:41.165 "nvme_io": false, 00:16:41.165 "nvme_io_md": false, 00:16:41.165 "write_zeroes": true, 00:16:41.165 "zcopy": false, 00:16:41.165 "get_zone_info": false, 
00:16:41.165 "zone_management": false, 00:16:41.165 "zone_append": false, 00:16:41.165 "compare": false, 00:16:41.165 "compare_and_write": false, 00:16:41.165 "abort": false, 00:16:41.165 "seek_hole": false, 00:16:41.165 "seek_data": false, 00:16:41.165 "copy": false, 00:16:41.165 "nvme_iov_md": false 00:16:41.165 }, 00:16:41.165 "driver_specific": { 00:16:41.165 "raid": { 00:16:41.165 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:41.165 "strip_size_kb": 64, 00:16:41.165 "state": "online", 00:16:41.165 "raid_level": "raid5f", 00:16:41.165 "superblock": true, 00:16:41.165 "num_base_bdevs": 3, 00:16:41.165 "num_base_bdevs_discovered": 3, 00:16:41.165 "num_base_bdevs_operational": 3, 00:16:41.165 "base_bdevs_list": [ 00:16:41.165 { 00:16:41.165 "name": "pt1", 00:16:41.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.165 "is_configured": true, 00:16:41.165 "data_offset": 2048, 00:16:41.165 "data_size": 63488 00:16:41.165 }, 00:16:41.165 { 00:16:41.165 "name": "pt2", 00:16:41.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.165 "is_configured": true, 00:16:41.165 "data_offset": 2048, 00:16:41.165 "data_size": 63488 00:16:41.165 }, 00:16:41.165 { 00:16:41.165 "name": "pt3", 00:16:41.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.165 "is_configured": true, 00:16:41.165 "data_offset": 2048, 00:16:41.165 "data_size": 63488 00:16:41.165 } 00:16:41.165 ] 00:16:41.165 } 00:16:41.165 } 00:16:41.165 }' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:41.165 pt2 00:16:41.165 pt3' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.165 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:41.425 [2024-11-05 11:32:40.451408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' efdd0159-0815-43de-b170-a8c8aa598fe6 '!=' efdd0159-0815-43de-b170-a8c8aa598fe6 ']' 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:41.425 11:32:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.425 [2024-11-05 11:32:40.495262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.425 "name": "raid_bdev1", 00:16:41.425 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:41.425 "strip_size_kb": 64, 00:16:41.425 "state": "online", 00:16:41.425 "raid_level": "raid5f", 00:16:41.425 "superblock": true, 00:16:41.425 "num_base_bdevs": 3, 00:16:41.425 "num_base_bdevs_discovered": 2, 00:16:41.425 "num_base_bdevs_operational": 2, 00:16:41.425 "base_bdevs_list": [ 00:16:41.425 { 00:16:41.425 "name": null, 00:16:41.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.425 "is_configured": false, 00:16:41.425 "data_offset": 0, 00:16:41.425 "data_size": 63488 00:16:41.425 }, 00:16:41.425 { 00:16:41.425 "name": "pt2", 00:16:41.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.425 "is_configured": true, 00:16:41.425 "data_offset": 2048, 00:16:41.425 "data_size": 63488 00:16:41.425 }, 00:16:41.425 { 00:16:41.425 "name": "pt3", 00:16:41.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.425 "is_configured": true, 00:16:41.425 "data_offset": 2048, 00:16:41.425 "data_size": 63488 00:16:41.425 } 00:16:41.425 ] 00:16:41.425 }' 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.425 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.684 [2024-11-05 11:32:40.950443] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:16:41.684 [2024-11-05 11:32:40.950510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.684 [2024-11-05 11:32:40.950608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.684 [2024-11-05 11:32:40.950677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.684 [2024-11-05 11:32:40.950714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:41.684 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:32:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.944 11:32:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 [2024-11-05 11:32:41.018295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.944 [2024-11-05 11:32:41.018343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.944 [2024-11-05 11:32:41.018373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:41.944 [2024-11-05 11:32:41.018383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:41.944 [2024-11-05 11:32:41.020509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.944 [2024-11-05 11:32:41.020549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.944 [2024-11-05 11:32:41.020617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.944 [2024-11-05 11:32:41.020658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.944 pt2 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.944 "name": "raid_bdev1", 00:16:41.944 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:41.944 "strip_size_kb": 64, 00:16:41.944 "state": "configuring", 00:16:41.944 "raid_level": "raid5f", 00:16:41.944 "superblock": true, 00:16:41.944 "num_base_bdevs": 3, 00:16:41.944 "num_base_bdevs_discovered": 1, 00:16:41.944 "num_base_bdevs_operational": 2, 00:16:41.944 "base_bdevs_list": [ 00:16:41.944 { 00:16:41.944 "name": null, 00:16:41.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.944 "is_configured": false, 00:16:41.944 "data_offset": 2048, 00:16:41.944 "data_size": 63488 00:16:41.944 }, 00:16:41.944 { 00:16:41.944 "name": "pt2", 00:16:41.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.944 "is_configured": true, 00:16:41.944 "data_offset": 2048, 00:16:41.944 "data_size": 63488 00:16:41.944 }, 00:16:41.944 { 00:16:41.944 "name": null, 00:16:41.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.944 "is_configured": false, 00:16:41.944 "data_offset": 2048, 00:16:41.944 "data_size": 63488 00:16:41.944 } 00:16:41.944 ] 00:16:41.944 }' 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.944 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 [2024-11-05 11:32:41.369784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:42.204 [2024-11-05 11:32:41.369891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.204 [2024-11-05 11:32:41.369928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:42.204 [2024-11-05 11:32:41.369957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.204 [2024-11-05 11:32:41.370362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.204 [2024-11-05 11:32:41.370430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:42.204 [2024-11-05 11:32:41.370519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:42.204 [2024-11-05 11:32:41.370577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:42.204 [2024-11-05 11:32:41.370731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:42.204 [2024-11-05 11:32:41.370769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:42.204 [2024-11-05 11:32:41.371010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:42.204 [2024-11-05 11:32:41.376573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:42.204 [2024-11-05 11:32:41.376629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:16:42.204 [2024-11-05 11:32:41.376948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.204 pt3 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.204 11:32:41 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.204 "name": "raid_bdev1", 00:16:42.204 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:42.204 "strip_size_kb": 64, 00:16:42.205 "state": "online", 00:16:42.205 "raid_level": "raid5f", 00:16:42.205 "superblock": true, 00:16:42.205 "num_base_bdevs": 3, 00:16:42.205 "num_base_bdevs_discovered": 2, 00:16:42.205 "num_base_bdevs_operational": 2, 00:16:42.205 "base_bdevs_list": [ 00:16:42.205 { 00:16:42.205 "name": null, 00:16:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.205 "is_configured": false, 00:16:42.205 "data_offset": 2048, 00:16:42.205 "data_size": 63488 00:16:42.205 }, 00:16:42.205 { 00:16:42.205 "name": "pt2", 00:16:42.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.205 "is_configured": true, 00:16:42.205 "data_offset": 2048, 00:16:42.205 "data_size": 63488 00:16:42.205 }, 00:16:42.205 { 00:16:42.205 "name": "pt3", 00:16:42.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.205 "is_configured": true, 00:16:42.205 "data_offset": 2048, 00:16:42.205 "data_size": 63488 00:16:42.205 } 00:16:42.205 ] 00:16:42.205 }' 00:16:42.205 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.205 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 [2024-11-05 11:32:41.807061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.813 [2024-11-05 11:32:41.807172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.813 [2024-11-05 11:32:41.807238] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:42.813 [2024-11-05 11:32:41.807315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.813 [2024-11-05 11:32:41.807325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.813 11:32:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 [2024-11-05 11:32:41.882953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.813 [2024-11-05 11:32:41.883056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.813 [2024-11-05 11:32:41.883090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:42.813 [2024-11-05 11:32:41.883126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.813 [2024-11-05 11:32:41.885501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.813 [2024-11-05 11:32:41.885585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.813 [2024-11-05 11:32:41.885663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.813 [2024-11-05 11:32:41.885706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.813 [2024-11-05 11:32:41.885825] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:42.813 [2024-11-05 11:32:41.885835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.813 [2024-11-05 11:32:41.885849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:42.813 [2024-11-05 11:32:41.885916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.813 pt1 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:42.813 11:32:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.813 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.814 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.814 "name": "raid_bdev1", 00:16:42.814 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:42.814 "strip_size_kb": 64, 00:16:42.814 "state": "configuring", 00:16:42.814 "raid_level": "raid5f", 00:16:42.814 
"superblock": true, 00:16:42.814 "num_base_bdevs": 3, 00:16:42.814 "num_base_bdevs_discovered": 1, 00:16:42.814 "num_base_bdevs_operational": 2, 00:16:42.814 "base_bdevs_list": [ 00:16:42.814 { 00:16:42.814 "name": null, 00:16:42.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.814 "is_configured": false, 00:16:42.814 "data_offset": 2048, 00:16:42.814 "data_size": 63488 00:16:42.814 }, 00:16:42.814 { 00:16:42.814 "name": "pt2", 00:16:42.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.814 "is_configured": true, 00:16:42.814 "data_offset": 2048, 00:16:42.814 "data_size": 63488 00:16:42.814 }, 00:16:42.814 { 00:16:42.814 "name": null, 00:16:42.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.814 "is_configured": false, 00:16:42.814 "data_offset": 2048, 00:16:42.814 "data_size": 63488 00:16:42.814 } 00:16:42.814 ] 00:16:42.814 }' 00:16:42.814 11:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.814 11:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.092 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 [2024-11-05 11:32:42.354171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.092 [2024-11-05 11:32:42.354275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.092 [2024-11-05 11:32:42.354313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:43.092 [2024-11-05 11:32:42.354341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.092 [2024-11-05 11:32:42.354848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.092 [2024-11-05 11:32:42.354918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.092 [2024-11-05 11:32:42.355049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.092 [2024-11-05 11:32:42.355105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.092 [2024-11-05 11:32:42.355346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:43.092 [2024-11-05 11:32:42.355395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.092 [2024-11-05 11:32:42.355707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:43.092 [2024-11-05 11:32:42.362067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:43.369 [2024-11-05 11:32:42.362150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:43.369 [2024-11-05 11:32:42.362458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.369 pt3 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.369 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.369 "name": "raid_bdev1", 00:16:43.369 "uuid": "efdd0159-0815-43de-b170-a8c8aa598fe6", 00:16:43.369 "strip_size_kb": 64, 00:16:43.369 "state": "online", 00:16:43.369 "raid_level": 
"raid5f", 00:16:43.369 "superblock": true, 00:16:43.369 "num_base_bdevs": 3, 00:16:43.369 "num_base_bdevs_discovered": 2, 00:16:43.369 "num_base_bdevs_operational": 2, 00:16:43.369 "base_bdevs_list": [ 00:16:43.369 { 00:16:43.370 "name": null, 00:16:43.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.370 "is_configured": false, 00:16:43.370 "data_offset": 2048, 00:16:43.370 "data_size": 63488 00:16:43.370 }, 00:16:43.370 { 00:16:43.370 "name": "pt2", 00:16:43.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.370 "is_configured": true, 00:16:43.370 "data_offset": 2048, 00:16:43.370 "data_size": 63488 00:16:43.370 }, 00:16:43.370 { 00:16:43.370 "name": "pt3", 00:16:43.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.370 "is_configured": true, 00:16:43.370 "data_offset": 2048, 00:16:43.370 "data_size": 63488 00:16:43.370 } 00:16:43.370 ] 00:16:43.370 }' 00:16:43.370 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.370 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.629 [2024-11-05 11:32:42.853282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' efdd0159-0815-43de-b170-a8c8aa598fe6 '!=' efdd0159-0815-43de-b170-a8c8aa598fe6 ']' 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81171 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81171 ']' 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81171 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:43.629 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81171 00:16:43.888 killing process with pid 81171 00:16:43.888 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:43.888 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:43.888 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81171' 00:16:43.888 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81171 00:16:43.888 [2024-11-05 11:32:42.917582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.888 [2024-11-05 11:32:42.917686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:43.889 11:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81171 00:16:43.889 [2024-11-05 11:32:42.917755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.889 [2024-11-05 11:32:42.917769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:44.147 [2024-11-05 11:32:43.240291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.084 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:45.084 00:16:45.084 real 0m7.654s 00:16:45.084 user 0m11.905s 00:16:45.084 sys 0m1.375s 00:16:45.084 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.084 ************************************ 00:16:45.084 END TEST raid5f_superblock_test 00:16:45.084 ************************************ 00:16:45.084 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 11:32:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:45.344 11:32:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:45.344 11:32:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:45.344 11:32:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:45.344 11:32:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 ************************************ 00:16:45.344 START TEST raid5f_rebuild_test 00:16:45.344 ************************************ 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:45.344 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:45.344 11:32:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81609 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81609 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81609 ']' 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
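The bdevperf invocation just above passes `-o 3M`, and the log immediately notes "I/O size of 3145728 is greater than zero copy threshold (65536). Zero copy mechanism will not be used." A small sketch of that arithmetic (the 64 KiB threshold value is taken from the log line itself; this is an illustration of why the message appears, not bdevperf's actual code path):

```python
# "-o 3M" on the bdevperf command line -> I/O size in bytes.
io_size = 3 * 1024 * 1024
# Zero-copy threshold reported in the log output.
zero_copy_threshold = 65536

# bdevperf falls back to bounce buffers when the I/O size exceeds it.
use_zero_copy = io_size <= zero_copy_threshold

assert io_size == 3145728
assert not use_zero_copy
```

With `-q 2` and a 3 MiB I/O size this workload deliberately issues large queued I/Os across full raid5f stripes, so the zero-copy fallback is expected rather than a misconfiguration.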
00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.345 11:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.345 [2024-11-05 11:32:44.511009] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:16:45.345 [2024-11-05 11:32:44.511581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81609 ] 00:16:45.345 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:45.345 Zero copy mechanism will not be used. 00:16:45.604 [2024-11-05 11:32:44.681450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.605 [2024-11-05 11:32:44.786614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.864 [2024-11-05 11:32:44.970825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.864 [2024-11-05 11:32:44.970942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.124 BaseBdev1_malloc 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.124 
11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.124 [2024-11-05 11:32:45.370805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:46.124 [2024-11-05 11:32:45.370925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.124 [2024-11-05 11:32:45.370966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:46.124 [2024-11-05 11:32:45.370996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.124 [2024-11-05 11:32:45.373087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.124 [2024-11-05 11:32:45.373191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.124 BaseBdev1 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.124 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.383 BaseBdev2_malloc 00:16:46.383 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 [2024-11-05 11:32:45.424643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:46.384 [2024-11-05 11:32:45.424750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.384 [2024-11-05 11:32:45.424784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:46.384 [2024-11-05 11:32:45.424814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.384 [2024-11-05 11:32:45.426769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.384 [2024-11-05 11:32:45.426838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:46.384 BaseBdev2 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 BaseBdev3_malloc 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 [2024-11-05 11:32:45.506394] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:46.384 [2024-11-05 11:32:45.506439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.384 [2024-11-05 11:32:45.506458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:46.384 [2024-11-05 11:32:45.506468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.384 [2024-11-05 11:32:45.508418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.384 [2024-11-05 11:32:45.508458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:46.384 BaseBdev3 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 spare_malloc 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 spare_delay 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 [2024-11-05 11:32:45.571726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.384 [2024-11-05 11:32:45.571773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.384 [2024-11-05 11:32:45.571789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:46.384 [2024-11-05 11:32:45.571799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.384 [2024-11-05 11:32:45.573794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.384 [2024-11-05 11:32:45.573890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.384 spare 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 [2024-11-05 11:32:45.583765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.384 [2024-11-05 11:32:45.585505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.384 [2024-11-05 11:32:45.585565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.384 [2024-11-05 11:32:45.585644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:46.384 [2024-11-05 11:32:45.585654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:46.384 [2024-11-05 
11:32:45.585885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:46.384 [2024-11-05 11:32:45.591286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:46.384 [2024-11-05 11:32:45.591355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:46.384 [2024-11-05 11:32:45.591557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.384 "name": "raid_bdev1", 00:16:46.384 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:46.384 "strip_size_kb": 64, 00:16:46.384 "state": "online", 00:16:46.384 "raid_level": "raid5f", 00:16:46.384 "superblock": false, 00:16:46.384 "num_base_bdevs": 3, 00:16:46.384 "num_base_bdevs_discovered": 3, 00:16:46.384 "num_base_bdevs_operational": 3, 00:16:46.384 "base_bdevs_list": [ 00:16:46.384 { 00:16:46.384 "name": "BaseBdev1", 00:16:46.384 "uuid": "f73434b9-fba0-57c1-815d-19cc6aa1e6d1", 00:16:46.384 "is_configured": true, 00:16:46.384 "data_offset": 0, 00:16:46.384 "data_size": 65536 00:16:46.384 }, 00:16:46.384 { 00:16:46.384 "name": "BaseBdev2", 00:16:46.384 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:46.384 "is_configured": true, 00:16:46.384 "data_offset": 0, 00:16:46.384 "data_size": 65536 00:16:46.384 }, 00:16:46.384 { 00:16:46.384 "name": "BaseBdev3", 00:16:46.384 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:46.384 "is_configured": true, 00:16:46.384 "data_offset": 0, 00:16:46.384 "data_size": 65536 00:16:46.384 } 00:16:46.384 ] 00:16:46.384 }' 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.384 11:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.953 11:32:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.953 [2024-11-05 11:32:46.013360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.953 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:47.213 [2024-11-05 11:32:46.276787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:47.213 /dev/nbd0 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.213 1+0 records in 00:16:47.213 1+0 records out 00:16:47.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052901 s, 7.7 
MB/s 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:47.213 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:47.473 512+0 records in 00:16:47.473 512+0 records out 00:16:47.473 67108864 bytes (67 MB, 64 MiB) copied, 0.362298 s, 185 MB/s 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:47.473 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.732 [2024-11-05 11:32:46.922271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 [2024-11-05 11:32:46.934382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.732 
11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.732 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.732 "name": "raid_bdev1", 00:16:47.732 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:47.733 "strip_size_kb": 64, 00:16:47.733 "state": "online", 00:16:47.733 "raid_level": "raid5f", 00:16:47.733 "superblock": false, 00:16:47.733 "num_base_bdevs": 3, 00:16:47.733 "num_base_bdevs_discovered": 2, 00:16:47.733 "num_base_bdevs_operational": 2, 00:16:47.733 "base_bdevs_list": [ 00:16:47.733 { 00:16:47.733 "name": null, 00:16:47.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.733 "is_configured": false, 00:16:47.733 "data_offset": 0, 00:16:47.733 "data_size": 65536 00:16:47.733 }, 00:16:47.733 { 
00:16:47.733 "name": "BaseBdev2", 00:16:47.733 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:47.733 "is_configured": true, 00:16:47.733 "data_offset": 0, 00:16:47.733 "data_size": 65536 00:16:47.733 }, 00:16:47.733 { 00:16:47.733 "name": "BaseBdev3", 00:16:47.733 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:47.733 "is_configured": true, 00:16:47.733 "data_offset": 0, 00:16:47.733 "data_size": 65536 00:16:47.733 } 00:16:47.733 ] 00:16:47.733 }' 00:16:47.733 11:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.733 11:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.301 11:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.301 11:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.301 11:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.301 [2024-11-05 11:32:47.329654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.301 [2024-11-05 11:32:47.345667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:48.301 11:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.301 11:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:48.301 [2024-11-05 11:32:47.352682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.239 "name": "raid_bdev1", 00:16:49.239 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:49.239 "strip_size_kb": 64, 00:16:49.239 "state": "online", 00:16:49.239 "raid_level": "raid5f", 00:16:49.239 "superblock": false, 00:16:49.239 "num_base_bdevs": 3, 00:16:49.239 "num_base_bdevs_discovered": 3, 00:16:49.239 "num_base_bdevs_operational": 3, 00:16:49.239 "process": { 00:16:49.239 "type": "rebuild", 00:16:49.239 "target": "spare", 00:16:49.239 "progress": { 00:16:49.239 "blocks": 20480, 00:16:49.239 "percent": 15 00:16:49.239 } 00:16:49.239 }, 00:16:49.239 "base_bdevs_list": [ 00:16:49.239 { 00:16:49.239 "name": "spare", 00:16:49.239 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:49.239 "is_configured": true, 00:16:49.239 "data_offset": 0, 00:16:49.239 "data_size": 65536 00:16:49.239 }, 00:16:49.239 { 00:16:49.239 "name": "BaseBdev2", 00:16:49.239 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:49.239 "is_configured": true, 00:16:49.239 "data_offset": 0, 00:16:49.239 "data_size": 65536 00:16:49.239 }, 00:16:49.239 { 00:16:49.239 "name": "BaseBdev3", 00:16:49.239 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:49.239 "is_configured": true, 00:16:49.239 
"data_offset": 0, 00:16:49.239 "data_size": 65536 00:16:49.239 } 00:16:49.239 ] 00:16:49.239 }' 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.239 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.239 [2024-11-05 11:32:48.483764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.498 [2024-11-05 11:32:48.559850] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.499 [2024-11-05 11:32:48.559901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.499 [2024-11-05 11:32:48.559918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.499 [2024-11-05 11:32:48.559926] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.499 11:32:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.499 "name": "raid_bdev1", 00:16:49.499 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:49.499 "strip_size_kb": 64, 00:16:49.499 "state": "online", 00:16:49.499 "raid_level": "raid5f", 00:16:49.499 "superblock": false, 00:16:49.499 "num_base_bdevs": 3, 00:16:49.499 "num_base_bdevs_discovered": 2, 00:16:49.499 "num_base_bdevs_operational": 2, 00:16:49.499 "base_bdevs_list": [ 00:16:49.499 { 00:16:49.499 "name": null, 00:16:49.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.499 "is_configured": false, 00:16:49.499 "data_offset": 0, 00:16:49.499 "data_size": 65536 00:16:49.499 }, 00:16:49.499 { 00:16:49.499 
"name": "BaseBdev2", 00:16:49.499 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:49.499 "is_configured": true, 00:16:49.499 "data_offset": 0, 00:16:49.499 "data_size": 65536 00:16:49.499 }, 00:16:49.499 { 00:16:49.499 "name": "BaseBdev3", 00:16:49.499 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:49.499 "is_configured": true, 00:16:49.499 "data_offset": 0, 00:16:49.499 "data_size": 65536 00:16:49.499 } 00:16:49.499 ] 00:16:49.499 }' 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.499 11:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.068 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.068 "name": "raid_bdev1", 00:16:50.068 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:50.068 "strip_size_kb": 64, 00:16:50.068 "state": 
"online", 00:16:50.068 "raid_level": "raid5f", 00:16:50.068 "superblock": false, 00:16:50.068 "num_base_bdevs": 3, 00:16:50.068 "num_base_bdevs_discovered": 2, 00:16:50.068 "num_base_bdevs_operational": 2, 00:16:50.068 "base_bdevs_list": [ 00:16:50.068 { 00:16:50.068 "name": null, 00:16:50.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.069 "is_configured": false, 00:16:50.069 "data_offset": 0, 00:16:50.069 "data_size": 65536 00:16:50.069 }, 00:16:50.069 { 00:16:50.069 "name": "BaseBdev2", 00:16:50.069 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:50.069 "is_configured": true, 00:16:50.069 "data_offset": 0, 00:16:50.069 "data_size": 65536 00:16:50.069 }, 00:16:50.069 { 00:16:50.069 "name": "BaseBdev3", 00:16:50.069 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:50.069 "is_configured": true, 00:16:50.069 "data_offset": 0, 00:16:50.069 "data_size": 65536 00:16:50.069 } 00:16:50.069 ] 00:16:50.069 }' 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.069 [2024-11-05 11:32:49.216667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.069 [2024-11-05 11:32:49.231734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:50.069 11:32:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.069 11:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:50.069 [2024-11-05 11:32:49.238921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.006 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.266 "name": "raid_bdev1", 00:16:51.266 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:51.266 "strip_size_kb": 64, 00:16:51.266 "state": "online", 00:16:51.266 "raid_level": "raid5f", 00:16:51.266 "superblock": false, 00:16:51.266 "num_base_bdevs": 3, 00:16:51.266 "num_base_bdevs_discovered": 3, 00:16:51.266 "num_base_bdevs_operational": 3, 00:16:51.266 "process": { 00:16:51.266 "type": "rebuild", 00:16:51.266 "target": "spare", 00:16:51.266 "progress": { 
00:16:51.266 "blocks": 20480, 00:16:51.266 "percent": 15 00:16:51.266 } 00:16:51.266 }, 00:16:51.266 "base_bdevs_list": [ 00:16:51.266 { 00:16:51.266 "name": "spare", 00:16:51.266 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:51.266 "is_configured": true, 00:16:51.266 "data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 }, 00:16:51.266 { 00:16:51.266 "name": "BaseBdev2", 00:16:51.266 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:51.266 "is_configured": true, 00:16:51.266 "data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 }, 00:16:51.266 { 00:16:51.266 "name": "BaseBdev3", 00:16:51.266 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:51.266 "is_configured": true, 00:16:51.266 "data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 } 00:16:51.266 ] 00:16:51.266 }' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.266 "name": "raid_bdev1", 00:16:51.266 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:51.266 "strip_size_kb": 64, 00:16:51.266 "state": "online", 00:16:51.266 "raid_level": "raid5f", 00:16:51.266 "superblock": false, 00:16:51.266 "num_base_bdevs": 3, 00:16:51.266 "num_base_bdevs_discovered": 3, 00:16:51.266 "num_base_bdevs_operational": 3, 00:16:51.266 "process": { 00:16:51.266 "type": "rebuild", 00:16:51.266 "target": "spare", 00:16:51.266 "progress": { 00:16:51.266 "blocks": 22528, 00:16:51.266 "percent": 17 00:16:51.266 } 00:16:51.266 }, 00:16:51.266 "base_bdevs_list": [ 00:16:51.266 { 00:16:51.266 "name": "spare", 00:16:51.266 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:51.266 "is_configured": true, 00:16:51.266 "data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 }, 00:16:51.266 { 00:16:51.266 "name": "BaseBdev2", 00:16:51.266 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:51.266 "is_configured": true, 00:16:51.266 
"data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 }, 00:16:51.266 { 00:16:51.266 "name": "BaseBdev3", 00:16:51.266 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:51.266 "is_configured": true, 00:16:51.266 "data_offset": 0, 00:16:51.266 "data_size": 65536 00:16:51.266 } 00:16:51.266 ] 00:16:51.266 }' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.266 11:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.646 11:32:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.646 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.646 "name": "raid_bdev1", 00:16:52.646 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:52.646 "strip_size_kb": 64, 00:16:52.646 "state": "online", 00:16:52.646 "raid_level": "raid5f", 00:16:52.646 "superblock": false, 00:16:52.646 "num_base_bdevs": 3, 00:16:52.646 "num_base_bdevs_discovered": 3, 00:16:52.646 "num_base_bdevs_operational": 3, 00:16:52.646 "process": { 00:16:52.646 "type": "rebuild", 00:16:52.646 "target": "spare", 00:16:52.646 "progress": { 00:16:52.646 "blocks": 45056, 00:16:52.646 "percent": 34 00:16:52.646 } 00:16:52.646 }, 00:16:52.646 "base_bdevs_list": [ 00:16:52.646 { 00:16:52.646 "name": "spare", 00:16:52.646 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:52.646 "is_configured": true, 00:16:52.646 "data_offset": 0, 00:16:52.646 "data_size": 65536 00:16:52.646 }, 00:16:52.647 { 00:16:52.647 "name": "BaseBdev2", 00:16:52.647 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:52.647 "is_configured": true, 00:16:52.647 "data_offset": 0, 00:16:52.647 "data_size": 65536 00:16:52.647 }, 00:16:52.647 { 00:16:52.647 "name": "BaseBdev3", 00:16:52.647 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:52.647 "is_configured": true, 00:16:52.647 "data_offset": 0, 00:16:52.647 "data_size": 65536 00:16:52.647 } 00:16:52.647 ] 00:16:52.647 }' 00:16:52.647 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.647 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.647 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.647 11:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.647 11:32:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.585 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.585 "name": "raid_bdev1", 00:16:53.585 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:53.585 "strip_size_kb": 64, 00:16:53.585 "state": "online", 00:16:53.585 "raid_level": "raid5f", 00:16:53.585 "superblock": false, 00:16:53.585 "num_base_bdevs": 3, 00:16:53.585 "num_base_bdevs_discovered": 3, 00:16:53.585 "num_base_bdevs_operational": 3, 00:16:53.585 "process": { 00:16:53.585 "type": "rebuild", 00:16:53.585 "target": "spare", 00:16:53.585 "progress": { 00:16:53.586 "blocks": 69632, 00:16:53.586 "percent": 53 00:16:53.586 } 00:16:53.586 }, 00:16:53.586 "base_bdevs_list": [ 00:16:53.586 { 00:16:53.586 "name": "spare", 00:16:53.586 
"uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:53.586 "is_configured": true, 00:16:53.586 "data_offset": 0, 00:16:53.586 "data_size": 65536 00:16:53.586 }, 00:16:53.586 { 00:16:53.586 "name": "BaseBdev2", 00:16:53.586 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:53.586 "is_configured": true, 00:16:53.586 "data_offset": 0, 00:16:53.586 "data_size": 65536 00:16:53.586 }, 00:16:53.586 { 00:16:53.586 "name": "BaseBdev3", 00:16:53.586 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:53.586 "is_configured": true, 00:16:53.586 "data_offset": 0, 00:16:53.586 "data_size": 65536 00:16:53.586 } 00:16:53.586 ] 00:16:53.586 }' 00:16:53.586 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.586 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.586 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.586 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.586 11:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.966 11:32:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.966 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.966 "name": "raid_bdev1", 00:16:54.966 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:54.966 "strip_size_kb": 64, 00:16:54.966 "state": "online", 00:16:54.966 "raid_level": "raid5f", 00:16:54.966 "superblock": false, 00:16:54.966 "num_base_bdevs": 3, 00:16:54.966 "num_base_bdevs_discovered": 3, 00:16:54.966 "num_base_bdevs_operational": 3, 00:16:54.966 "process": { 00:16:54.966 "type": "rebuild", 00:16:54.966 "target": "spare", 00:16:54.966 "progress": { 00:16:54.966 "blocks": 92160, 00:16:54.966 "percent": 70 00:16:54.966 } 00:16:54.966 }, 00:16:54.966 "base_bdevs_list": [ 00:16:54.966 { 00:16:54.966 "name": "spare", 00:16:54.966 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:54.967 "is_configured": true, 00:16:54.967 "data_offset": 0, 00:16:54.967 "data_size": 65536 00:16:54.967 }, 00:16:54.967 { 00:16:54.967 "name": "BaseBdev2", 00:16:54.967 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:54.967 "is_configured": true, 00:16:54.967 "data_offset": 0, 00:16:54.967 "data_size": 65536 00:16:54.967 }, 00:16:54.967 { 00:16:54.967 "name": "BaseBdev3", 00:16:54.967 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:54.967 "is_configured": true, 00:16:54.967 "data_offset": 0, 00:16:54.967 "data_size": 65536 00:16:54.967 } 00:16:54.967 ] 00:16:54.967 }' 00:16:54.967 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.967 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.967 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.967 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.967 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.906 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.906 "name": "raid_bdev1", 00:16:55.906 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:55.906 "strip_size_kb": 64, 00:16:55.906 "state": "online", 00:16:55.906 "raid_level": "raid5f", 00:16:55.906 "superblock": false, 00:16:55.906 "num_base_bdevs": 3, 00:16:55.906 "num_base_bdevs_discovered": 3, 00:16:55.906 
"num_base_bdevs_operational": 3, 00:16:55.906 "process": { 00:16:55.906 "type": "rebuild", 00:16:55.906 "target": "spare", 00:16:55.906 "progress": { 00:16:55.906 "blocks": 116736, 00:16:55.906 "percent": 89 00:16:55.906 } 00:16:55.906 }, 00:16:55.906 "base_bdevs_list": [ 00:16:55.906 { 00:16:55.906 "name": "spare", 00:16:55.906 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:55.906 "is_configured": true, 00:16:55.906 "data_offset": 0, 00:16:55.906 "data_size": 65536 00:16:55.906 }, 00:16:55.906 { 00:16:55.906 "name": "BaseBdev2", 00:16:55.906 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:55.906 "is_configured": true, 00:16:55.906 "data_offset": 0, 00:16:55.906 "data_size": 65536 00:16:55.906 }, 00:16:55.906 { 00:16:55.906 "name": "BaseBdev3", 00:16:55.906 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:55.906 "is_configured": true, 00:16:55.906 "data_offset": 0, 00:16:55.906 "data_size": 65536 00:16:55.906 } 00:16:55.906 ] 00:16:55.906 }' 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.906 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.507 [2024-11-05 11:32:55.673114] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:56.507 [2024-11-05 11:32:55.673193] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:56.507 [2024-11-05 11:32:55.673232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.077 "name": "raid_bdev1", 00:16:57.077 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:57.077 "strip_size_kb": 64, 00:16:57.077 "state": "online", 00:16:57.077 "raid_level": "raid5f", 00:16:57.077 "superblock": false, 00:16:57.077 "num_base_bdevs": 3, 00:16:57.077 "num_base_bdevs_discovered": 3, 00:16:57.077 "num_base_bdevs_operational": 3, 00:16:57.077 "base_bdevs_list": [ 00:16:57.077 { 00:16:57.077 "name": "spare", 00:16:57.077 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": "BaseBdev2", 00:16:57.077 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:57.077 "is_configured": true, 00:16:57.077 
"data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": "BaseBdev3", 00:16:57.077 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 } 00:16:57.077 ] 00:16:57.077 }' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.077 11:32:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.077 "name": "raid_bdev1", 00:16:57.077 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:57.077 "strip_size_kb": 64, 00:16:57.077 "state": "online", 00:16:57.077 "raid_level": "raid5f", 00:16:57.077 "superblock": false, 00:16:57.077 "num_base_bdevs": 3, 00:16:57.077 "num_base_bdevs_discovered": 3, 00:16:57.077 "num_base_bdevs_operational": 3, 00:16:57.077 "base_bdevs_list": [ 00:16:57.077 { 00:16:57.077 "name": "spare", 00:16:57.077 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": "BaseBdev2", 00:16:57.077 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 }, 00:16:57.077 { 00:16:57.077 "name": "BaseBdev3", 00:16:57.077 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:57.077 "is_configured": true, 00:16:57.077 "data_offset": 0, 00:16:57.077 "data_size": 65536 00:16:57.077 } 00:16:57.077 ] 00:16:57.077 }' 00:16:57.077 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.337 11:32:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.337 "name": "raid_bdev1", 00:16:57.337 "uuid": "d9193859-6242-4fcf-b9a7-4d3471c5218c", 00:16:57.337 "strip_size_kb": 64, 00:16:57.337 "state": "online", 00:16:57.337 "raid_level": "raid5f", 00:16:57.337 "superblock": false, 00:16:57.337 "num_base_bdevs": 3, 00:16:57.337 "num_base_bdevs_discovered": 3, 00:16:57.337 "num_base_bdevs_operational": 3, 00:16:57.337 "base_bdevs_list": [ 00:16:57.337 { 00:16:57.337 "name": "spare", 00:16:57.337 "uuid": "8fbd994c-8d4f-5eea-8249-6d7b3406baa4", 00:16:57.337 "is_configured": true, 00:16:57.337 "data_offset": 0, 00:16:57.337 "data_size": 65536 00:16:57.337 }, 00:16:57.337 { 00:16:57.337 
"name": "BaseBdev2", 00:16:57.337 "uuid": "42496cbf-d684-5d6e-b73f-4a5b9baaf8ee", 00:16:57.337 "is_configured": true, 00:16:57.337 "data_offset": 0, 00:16:57.337 "data_size": 65536 00:16:57.337 }, 00:16:57.337 { 00:16:57.337 "name": "BaseBdev3", 00:16:57.337 "uuid": "e6eb239e-0294-5d52-b882-a98cb34a25f4", 00:16:57.337 "is_configured": true, 00:16:57.337 "data_offset": 0, 00:16:57.337 "data_size": 65536 00:16:57.337 } 00:16:57.337 ] 00:16:57.337 }' 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.337 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.597 [2024-11-05 11:32:56.804209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.597 [2024-11-05 11:32:56.804284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.597 [2024-11-05 11:32:56.804386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.597 [2024-11-05 11:32:56.804480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.597 [2024-11-05 11:32:56.804563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:57.597 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:57.857 /dev/nbd0 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.857 1+0 records in 00:16:57.857 1+0 records out 00:16:57.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245833 s, 16.7 MB/s 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:57.857 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:58.117 /dev/nbd1 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.117 1+0 records in 00:16:58.117 1+0 records out 00:16:58.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447346 s, 9.2 MB/s 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:58.117 11:32:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.117 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.377 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.637 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81609 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81609 ']' 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81609 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81609 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81609' 00:16:58.897 killing process with pid 81609 00:16:58.897 Received shutdown signal, test time was about 60.000000 seconds 00:16:58.897 00:16:58.897 Latency(us) 00:16:58.897 [2024-11-05T11:32:58.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.897 [2024-11-05T11:32:58.171Z] =================================================================================================================== 00:16:58.897 [2024-11-05T11:32:58.171Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81609 00:16:58.897 [2024-11-05 11:32:57.990520] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.897 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81609 00:16:59.157 [2024-11-05 11:32:58.361809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.546 00:17:00.546 real 0m14.975s 00:17:00.546 user 0m18.350s 00:17:00.546 sys 0m1.926s 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.546 ************************************ 00:17:00.546 END TEST raid5f_rebuild_test 00:17:00.546 ************************************ 00:17:00.546 11:32:59 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:00.546 11:32:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:00.546 11:32:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:00.546 11:32:59 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:17:00.546 ************************************ 00:17:00.546 START TEST raid5f_rebuild_test_sb 00:17:00.546 ************************************ 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:00.546 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82045 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82045 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82045 ']' 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:00.547 11:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.547 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.547 Zero copy mechanism will not be used. 00:17:00.547 [2024-11-05 11:32:59.560899] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:17:00.547 [2024-11-05 11:32:59.561005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82045 ] 00:17:00.547 [2024-11-05 11:32:59.736115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.808 [2024-11-05 11:32:59.843327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.808 [2024-11-05 11:33:00.041265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.808 [2024-11-05 11:33:00.041296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 BaseBdev1_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 [2024-11-05 11:33:00.431700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.378 [2024-11-05 11:33:00.431823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.378 [2024-11-05 11:33:00.431851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.378 [2024-11-05 11:33:00.431862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.378 [2024-11-05 11:33:00.433931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.378 [2024-11-05 11:33:00.433970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.378 BaseBdev1 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.378 11:33:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 BaseBdev2_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 [2024-11-05 11:33:00.484521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:01.378 [2024-11-05 11:33:00.484626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.378 [2024-11-05 11:33:00.484647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.378 [2024-11-05 11:33:00.484659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.378 [2024-11-05 11:33:00.486700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.378 [2024-11-05 11:33:00.486736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.378 BaseBdev2 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.378 BaseBdev3_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 [2024-11-05 11:33:00.575976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:01.378 [2024-11-05 11:33:00.576082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.378 [2024-11-05 11:33:00.576107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.378 [2024-11-05 11:33:00.576118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.378 [2024-11-05 11:33:00.578134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.378 [2024-11-05 11:33:00.578177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:01.378 BaseBdev3 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 spare_malloc 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 spare_delay 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.378 [2024-11-05 11:33:00.642073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:01.378 [2024-11-05 11:33:00.642120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.378 [2024-11-05 11:33:00.642144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:01.378 [2024-11-05 11:33:00.642156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.378 [2024-11-05 11:33:00.644151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.378 [2024-11-05 11:33:00.644187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:01.378 spare 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.378 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.638 [2024-11-05 11:33:00.654115] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.638 [2024-11-05 11:33:00.655824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.638 [2024-11-05 11:33:00.655885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.638 [2024-11-05 11:33:00.656052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:01.638 [2024-11-05 11:33:00.656066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:01.638 [2024-11-05 11:33:00.656309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:01.638 [2024-11-05 11:33:00.661706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:01.638 [2024-11-05 11:33:00.661728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:01.638 [2024-11-05 11:33:00.661881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.638 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.638 "name": "raid_bdev1", 00:17:01.638 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:01.638 "strip_size_kb": 64, 00:17:01.638 "state": "online", 00:17:01.638 "raid_level": "raid5f", 00:17:01.638 "superblock": true, 00:17:01.638 "num_base_bdevs": 3, 00:17:01.638 "num_base_bdevs_discovered": 3, 00:17:01.638 "num_base_bdevs_operational": 3, 00:17:01.638 "base_bdevs_list": [ 00:17:01.638 { 00:17:01.638 "name": "BaseBdev1", 00:17:01.638 "uuid": "4021b01d-a702-5c3f-8eff-87b856159403", 00:17:01.638 "is_configured": true, 00:17:01.638 "data_offset": 2048, 00:17:01.639 "data_size": 63488 00:17:01.639 }, 00:17:01.639 { 00:17:01.639 "name": "BaseBdev2", 00:17:01.639 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:01.639 "is_configured": true, 00:17:01.639 "data_offset": 2048, 00:17:01.639 "data_size": 63488 00:17:01.639 }, 00:17:01.639 { 00:17:01.639 "name": "BaseBdev3", 00:17:01.639 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:01.639 "is_configured": true, 
00:17:01.639 "data_offset": 2048, 00:17:01.639 "data_size": 63488 00:17:01.639 } 00:17:01.639 ] 00:17:01.639 }' 00:17:01.639 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.639 11:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.899 [2024-11-05 11:33:01.055452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:01.899 11:33:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:01.899 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:02.159 [2024-11-05 11:33:01.343332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:02.159 /dev/nbd0 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i <= 20 )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.159 1+0 records in 00:17:02.159 1+0 records out 00:17:02.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373016 s, 11.0 MB/s 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:02.159 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:02.729 496+0 records in 00:17:02.729 496+0 records out 00:17:02.729 65011712 bytes (65 MB, 62 MiB) copied, 0.332982 s, 195 MB/s 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.729 [2024-11-05 11:33:01.957722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.729 [2024-11-05 11:33:01.969590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.729 11:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.729 11:33:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.729 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.989 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.989 "name": "raid_bdev1", 00:17:02.989 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:02.989 "strip_size_kb": 64, 00:17:02.989 "state": "online", 00:17:02.989 "raid_level": "raid5f", 00:17:02.989 "superblock": true, 00:17:02.989 "num_base_bdevs": 3, 00:17:02.989 "num_base_bdevs_discovered": 2, 00:17:02.989 "num_base_bdevs_operational": 2, 00:17:02.989 "base_bdevs_list": [ 00:17:02.989 { 00:17:02.989 "name": null, 00:17:02.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.989 "is_configured": false, 00:17:02.989 "data_offset": 0, 00:17:02.989 "data_size": 63488 00:17:02.989 }, 00:17:02.989 { 00:17:02.989 "name": "BaseBdev2", 00:17:02.989 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:02.989 "is_configured": true, 00:17:02.989 "data_offset": 2048, 00:17:02.989 "data_size": 63488 00:17:02.989 }, 00:17:02.989 { 00:17:02.989 "name": "BaseBdev3", 00:17:02.989 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:02.989 "is_configured": true, 00:17:02.989 "data_offset": 2048, 00:17:02.989 "data_size": 63488 00:17:02.989 } 00:17:02.989 ] 00:17:02.989 }' 00:17:02.989 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.989 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.249 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.249 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 [2024-11-05 11:33:02.388890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.249 [2024-11-05 11:33:02.405985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:03.249 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.249 11:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:03.249 [2024-11-05 11:33:02.413839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.188 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.188 "name": "raid_bdev1", 00:17:04.188 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:04.188 "strip_size_kb": 64, 00:17:04.188 "state": "online", 00:17:04.188 "raid_level": "raid5f", 00:17:04.188 
"superblock": true, 00:17:04.188 "num_base_bdevs": 3, 00:17:04.188 "num_base_bdevs_discovered": 3, 00:17:04.188 "num_base_bdevs_operational": 3, 00:17:04.188 "process": { 00:17:04.188 "type": "rebuild", 00:17:04.188 "target": "spare", 00:17:04.188 "progress": { 00:17:04.188 "blocks": 20480, 00:17:04.188 "percent": 16 00:17:04.189 } 00:17:04.189 }, 00:17:04.189 "base_bdevs_list": [ 00:17:04.189 { 00:17:04.189 "name": "spare", 00:17:04.189 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:04.189 "is_configured": true, 00:17:04.189 "data_offset": 2048, 00:17:04.189 "data_size": 63488 00:17:04.189 }, 00:17:04.189 { 00:17:04.189 "name": "BaseBdev2", 00:17:04.189 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:04.189 "is_configured": true, 00:17:04.189 "data_offset": 2048, 00:17:04.189 "data_size": 63488 00:17:04.189 }, 00:17:04.189 { 00:17:04.189 "name": "BaseBdev3", 00:17:04.189 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:04.189 "is_configured": true, 00:17:04.189 "data_offset": 2048, 00:17:04.189 "data_size": 63488 00:17:04.189 } 00:17:04.189 ] 00:17:04.189 }' 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.449 [2024-11-05 11:33:03.549042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:04.449 [2024-11-05 11:33:03.621157] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.449 [2024-11-05 11:33:03.621207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.449 [2024-11-05 11:33:03.621224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.449 [2024-11-05 11:33:03.621231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.449 "name": "raid_bdev1", 00:17:04.449 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:04.449 "strip_size_kb": 64, 00:17:04.449 "state": "online", 00:17:04.449 "raid_level": "raid5f", 00:17:04.449 "superblock": true, 00:17:04.449 "num_base_bdevs": 3, 00:17:04.449 "num_base_bdevs_discovered": 2, 00:17:04.449 "num_base_bdevs_operational": 2, 00:17:04.449 "base_bdevs_list": [ 00:17:04.449 { 00:17:04.449 "name": null, 00:17:04.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.449 "is_configured": false, 00:17:04.449 "data_offset": 0, 00:17:04.449 "data_size": 63488 00:17:04.449 }, 00:17:04.449 { 00:17:04.449 "name": "BaseBdev2", 00:17:04.449 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:04.449 "is_configured": true, 00:17:04.449 "data_offset": 2048, 00:17:04.449 "data_size": 63488 00:17:04.449 }, 00:17:04.449 { 00:17:04.449 "name": "BaseBdev3", 00:17:04.449 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:04.449 "is_configured": true, 00:17:04.449 "data_offset": 2048, 00:17:04.449 "data_size": 63488 00:17:04.449 } 00:17:04.449 ] 00:17:04.449 }' 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.449 11:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.018 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.018 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.019 11:33:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.019 "name": "raid_bdev1", 00:17:05.019 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:05.019 "strip_size_kb": 64, 00:17:05.019 "state": "online", 00:17:05.019 "raid_level": "raid5f", 00:17:05.019 "superblock": true, 00:17:05.019 "num_base_bdevs": 3, 00:17:05.019 "num_base_bdevs_discovered": 2, 00:17:05.019 "num_base_bdevs_operational": 2, 00:17:05.019 "base_bdevs_list": [ 00:17:05.019 { 00:17:05.019 "name": null, 00:17:05.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.019 "is_configured": false, 00:17:05.019 "data_offset": 0, 00:17:05.019 "data_size": 63488 00:17:05.019 }, 00:17:05.019 { 00:17:05.019 "name": "BaseBdev2", 00:17:05.019 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:05.019 "is_configured": true, 00:17:05.019 "data_offset": 2048, 00:17:05.019 "data_size": 63488 00:17:05.019 }, 00:17:05.019 { 00:17:05.019 "name": "BaseBdev3", 00:17:05.019 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:05.019 "is_configured": true, 00:17:05.019 "data_offset": 2048, 00:17:05.019 
"data_size": 63488 00:17:05.019 } 00:17:05.019 ] 00:17:05.019 }' 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.019 [2024-11-05 11:33:04.228546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.019 [2024-11-05 11:33:04.243646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.019 11:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:05.019 [2024-11-05 11:33:04.250548] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.400 "name": "raid_bdev1", 00:17:06.400 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:06.400 "strip_size_kb": 64, 00:17:06.400 "state": "online", 00:17:06.400 "raid_level": "raid5f", 00:17:06.400 "superblock": true, 00:17:06.400 "num_base_bdevs": 3, 00:17:06.400 "num_base_bdevs_discovered": 3, 00:17:06.400 "num_base_bdevs_operational": 3, 00:17:06.400 "process": { 00:17:06.400 "type": "rebuild", 00:17:06.400 "target": "spare", 00:17:06.400 "progress": { 00:17:06.400 "blocks": 20480, 00:17:06.400 "percent": 16 00:17:06.400 } 00:17:06.400 }, 00:17:06.400 "base_bdevs_list": [ 00:17:06.400 { 00:17:06.400 "name": "spare", 00:17:06.400 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 }, 00:17:06.400 { 00:17:06.400 "name": "BaseBdev2", 00:17:06.400 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 }, 00:17:06.400 { 00:17:06.400 "name": "BaseBdev3", 00:17:06.400 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 } 00:17:06.400 ] 00:17:06.400 }' 
00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:06.400 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.400 "name": "raid_bdev1", 00:17:06.400 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:06.400 "strip_size_kb": 64, 00:17:06.400 "state": "online", 00:17:06.400 "raid_level": "raid5f", 00:17:06.400 "superblock": true, 00:17:06.400 "num_base_bdevs": 3, 00:17:06.400 "num_base_bdevs_discovered": 3, 00:17:06.400 "num_base_bdevs_operational": 3, 00:17:06.400 "process": { 00:17:06.400 "type": "rebuild", 00:17:06.400 "target": "spare", 00:17:06.400 "progress": { 00:17:06.400 "blocks": 22528, 00:17:06.400 "percent": 17 00:17:06.400 } 00:17:06.400 }, 00:17:06.400 "base_bdevs_list": [ 00:17:06.400 { 00:17:06.400 "name": "spare", 00:17:06.400 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 }, 00:17:06.400 { 00:17:06.400 "name": "BaseBdev2", 00:17:06.400 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 }, 00:17:06.400 { 00:17:06.400 "name": "BaseBdev3", 00:17:06.400 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:06.400 "is_configured": true, 00:17:06.400 "data_offset": 2048, 00:17:06.400 "data_size": 63488 00:17:06.400 } 00:17:06.400 ] 00:17:06.400 }' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.400 11:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.339 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.339 "name": "raid_bdev1", 00:17:07.339 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:07.339 "strip_size_kb": 64, 00:17:07.339 "state": "online", 00:17:07.339 "raid_level": "raid5f", 00:17:07.339 "superblock": true, 00:17:07.339 "num_base_bdevs": 3, 00:17:07.339 "num_base_bdevs_discovered": 3, 00:17:07.339 
"num_base_bdevs_operational": 3, 00:17:07.339 "process": { 00:17:07.339 "type": "rebuild", 00:17:07.339 "target": "spare", 00:17:07.339 "progress": { 00:17:07.339 "blocks": 45056, 00:17:07.340 "percent": 35 00:17:07.340 } 00:17:07.340 }, 00:17:07.340 "base_bdevs_list": [ 00:17:07.340 { 00:17:07.340 "name": "spare", 00:17:07.340 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:07.340 "is_configured": true, 00:17:07.340 "data_offset": 2048, 00:17:07.340 "data_size": 63488 00:17:07.340 }, 00:17:07.340 { 00:17:07.340 "name": "BaseBdev2", 00:17:07.340 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:07.340 "is_configured": true, 00:17:07.340 "data_offset": 2048, 00:17:07.340 "data_size": 63488 00:17:07.340 }, 00:17:07.340 { 00:17:07.340 "name": "BaseBdev3", 00:17:07.340 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:07.340 "is_configured": true, 00:17:07.340 "data_offset": 2048, 00:17:07.340 "data_size": 63488 00:17:07.340 } 00:17:07.340 ] 00:17:07.340 }' 00:17:07.340 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.340 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.340 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.599 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.599 11:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.538 "name": "raid_bdev1", 00:17:08.538 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:08.538 "strip_size_kb": 64, 00:17:08.538 "state": "online", 00:17:08.538 "raid_level": "raid5f", 00:17:08.538 "superblock": true, 00:17:08.538 "num_base_bdevs": 3, 00:17:08.538 "num_base_bdevs_discovered": 3, 00:17:08.538 "num_base_bdevs_operational": 3, 00:17:08.538 "process": { 00:17:08.538 "type": "rebuild", 00:17:08.538 "target": "spare", 00:17:08.538 "progress": { 00:17:08.538 "blocks": 67584, 00:17:08.538 "percent": 53 00:17:08.538 } 00:17:08.538 }, 00:17:08.538 "base_bdevs_list": [ 00:17:08.538 { 00:17:08.538 "name": "spare", 00:17:08.538 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:08.538 "is_configured": true, 00:17:08.538 "data_offset": 2048, 00:17:08.538 "data_size": 63488 00:17:08.538 }, 00:17:08.538 { 00:17:08.538 "name": "BaseBdev2", 00:17:08.538 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:08.538 "is_configured": true, 00:17:08.538 "data_offset": 2048, 00:17:08.538 "data_size": 63488 00:17:08.538 }, 00:17:08.538 { 00:17:08.538 "name": "BaseBdev3", 
00:17:08.538 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:08.538 "is_configured": true, 00:17:08.538 "data_offset": 2048, 00:17:08.538 "data_size": 63488 00:17:08.538 } 00:17:08.538 ] 00:17:08.538 }' 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.538 11:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.919 "name": "raid_bdev1", 00:17:09.919 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:09.919 "strip_size_kb": 64, 00:17:09.919 "state": "online", 00:17:09.919 "raid_level": "raid5f", 00:17:09.919 "superblock": true, 00:17:09.919 "num_base_bdevs": 3, 00:17:09.919 "num_base_bdevs_discovered": 3, 00:17:09.919 "num_base_bdevs_operational": 3, 00:17:09.919 "process": { 00:17:09.919 "type": "rebuild", 00:17:09.919 "target": "spare", 00:17:09.919 "progress": { 00:17:09.919 "blocks": 92160, 00:17:09.919 "percent": 72 00:17:09.919 } 00:17:09.919 }, 00:17:09.919 "base_bdevs_list": [ 00:17:09.919 { 00:17:09.919 "name": "spare", 00:17:09.919 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:09.919 "is_configured": true, 00:17:09.919 "data_offset": 2048, 00:17:09.919 "data_size": 63488 00:17:09.919 }, 00:17:09.919 { 00:17:09.919 "name": "BaseBdev2", 00:17:09.919 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:09.919 "is_configured": true, 00:17:09.919 "data_offset": 2048, 00:17:09.919 "data_size": 63488 00:17:09.919 }, 00:17:09.919 { 00:17:09.919 "name": "BaseBdev3", 00:17:09.919 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:09.919 "is_configured": true, 00:17:09.919 "data_offset": 2048, 00:17:09.919 "data_size": 63488 00:17:09.919 } 00:17:09.919 ] 00:17:09.919 }' 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.919 11:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.856 11:33:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.856 11:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.856 "name": "raid_bdev1", 00:17:10.856 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:10.856 "strip_size_kb": 64, 00:17:10.856 "state": "online", 00:17:10.856 "raid_level": "raid5f", 00:17:10.856 "superblock": true, 00:17:10.856 "num_base_bdevs": 3, 00:17:10.856 "num_base_bdevs_discovered": 3, 00:17:10.856 "num_base_bdevs_operational": 3, 00:17:10.856 "process": { 00:17:10.856 "type": "rebuild", 00:17:10.856 "target": "spare", 00:17:10.856 "progress": { 00:17:10.856 "blocks": 114688, 00:17:10.856 "percent": 90 00:17:10.856 } 00:17:10.856 }, 00:17:10.856 "base_bdevs_list": [ 00:17:10.856 { 00:17:10.856 "name": "spare", 00:17:10.856 "uuid": 
"d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:10.856 "is_configured": true, 00:17:10.856 "data_offset": 2048, 00:17:10.856 "data_size": 63488 00:17:10.856 }, 00:17:10.856 { 00:17:10.856 "name": "BaseBdev2", 00:17:10.856 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:10.856 "is_configured": true, 00:17:10.856 "data_offset": 2048, 00:17:10.856 "data_size": 63488 00:17:10.856 }, 00:17:10.856 { 00:17:10.856 "name": "BaseBdev3", 00:17:10.856 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:10.856 "is_configured": true, 00:17:10.856 "data_offset": 2048, 00:17:10.856 "data_size": 63488 00:17:10.856 } 00:17:10.856 ] 00:17:10.856 }' 00:17:10.856 11:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.856 11:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.856 11:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.856 11:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.856 11:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.426 [2024-11-05 11:33:10.490695] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.426 [2024-11-05 11:33:10.490767] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.426 [2024-11-05 11:33:10.490870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.994 "name": "raid_bdev1", 00:17:11.994 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:11.994 "strip_size_kb": 64, 00:17:11.994 "state": "online", 00:17:11.994 "raid_level": "raid5f", 00:17:11.994 "superblock": true, 00:17:11.994 "num_base_bdevs": 3, 00:17:11.994 "num_base_bdevs_discovered": 3, 00:17:11.994 "num_base_bdevs_operational": 3, 00:17:11.994 "base_bdevs_list": [ 00:17:11.994 { 00:17:11.994 "name": "spare", 00:17:11.994 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:11.994 "is_configured": true, 00:17:11.994 "data_offset": 2048, 00:17:11.994 "data_size": 63488 00:17:11.994 }, 00:17:11.994 { 00:17:11.994 "name": "BaseBdev2", 00:17:11.994 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:11.994 "is_configured": true, 00:17:11.994 "data_offset": 2048, 00:17:11.994 "data_size": 63488 00:17:11.994 }, 00:17:11.994 { 00:17:11.994 "name": "BaseBdev3", 00:17:11.994 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:11.994 "is_configured": true, 00:17:11.994 "data_offset": 2048, 00:17:11.994 "data_size": 63488 00:17:11.994 } 
00:17:11.994 ] 00:17:11.994 }' 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.994 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.254 "name": "raid_bdev1", 00:17:12.254 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:12.254 "strip_size_kb": 64, 00:17:12.254 "state": "online", 00:17:12.254 "raid_level": 
"raid5f", 00:17:12.254 "superblock": true, 00:17:12.254 "num_base_bdevs": 3, 00:17:12.254 "num_base_bdevs_discovered": 3, 00:17:12.254 "num_base_bdevs_operational": 3, 00:17:12.254 "base_bdevs_list": [ 00:17:12.254 { 00:17:12.254 "name": "spare", 00:17:12.254 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 "data_size": 63488 00:17:12.254 }, 00:17:12.254 { 00:17:12.254 "name": "BaseBdev2", 00:17:12.254 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 "data_size": 63488 00:17:12.254 }, 00:17:12.254 { 00:17:12.254 "name": "BaseBdev3", 00:17:12.254 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 "data_size": 63488 00:17:12.254 } 00:17:12.254 ] 00:17:12.254 }' 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.254 11:33:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.254 "name": "raid_bdev1", 00:17:12.254 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:12.254 "strip_size_kb": 64, 00:17:12.254 "state": "online", 00:17:12.254 "raid_level": "raid5f", 00:17:12.254 "superblock": true, 00:17:12.254 "num_base_bdevs": 3, 00:17:12.254 "num_base_bdevs_discovered": 3, 00:17:12.254 "num_base_bdevs_operational": 3, 00:17:12.254 "base_bdevs_list": [ 00:17:12.254 { 00:17:12.254 "name": "spare", 00:17:12.254 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 "data_size": 63488 00:17:12.254 }, 00:17:12.254 { 00:17:12.254 "name": "BaseBdev2", 00:17:12.254 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 
"data_size": 63488 00:17:12.254 }, 00:17:12.254 { 00:17:12.254 "name": "BaseBdev3", 00:17:12.254 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:12.254 "is_configured": true, 00:17:12.254 "data_offset": 2048, 00:17:12.254 "data_size": 63488 00:17:12.254 } 00:17:12.254 ] 00:17:12.254 }' 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.254 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.824 [2024-11-05 11:33:11.875256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.824 [2024-11-05 11:33:11.875329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.824 [2024-11-05 11:33:11.875433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.824 [2024-11-05 11:33:11.875549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.824 [2024-11-05 11:33:11.875629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.824 11:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:13.084 /dev/nbd0 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.084 1+0 records in 00:17:13.084 1+0 records out 00:17:13.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387618 s, 10.6 MB/s 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.084 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:13.344 /dev/nbd1 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.344 1+0 records in 00:17:13.344 1+0 records out 00:17:13.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246342 s, 16.6 MB/s 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- 
# '[' 4096 '!=' 0 ']' 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.344 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.605 11:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.912 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.912 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 [2024-11-05 11:33:13.036523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.913 [2024-11-05 11:33:13.036591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.913 [2024-11-05 11:33:13.036613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:13.913 [2024-11-05 11:33:13.036624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.913 [2024-11-05 11:33:13.038844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.913 [2024-11-05 11:33:13.038886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.913 [2024-11-05 11:33:13.038983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.913 [2024-11-05 11:33:13.039041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.913 [2024-11-05 11:33:13.039246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.913 [2024-11-05 11:33:13.039361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.913 spare 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.913 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.914 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.914 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.914 [2024-11-05 11:33:13.139296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.914 [2024-11-05 11:33:13.139384] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:13.914 [2024-11-05 11:33:13.139720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:13.914 [2024-11-05 11:33:13.145789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.914 [2024-11-05 11:33:13.145810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.914 [2024-11-05 11:33:13.146000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.175 11:33:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.175 "name": "raid_bdev1", 00:17:14.175 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:14.175 "strip_size_kb": 64, 00:17:14.175 "state": "online", 00:17:14.175 "raid_level": "raid5f", 00:17:14.175 "superblock": true, 00:17:14.175 "num_base_bdevs": 3, 00:17:14.175 "num_base_bdevs_discovered": 3, 00:17:14.175 "num_base_bdevs_operational": 3, 00:17:14.175 "base_bdevs_list": [ 00:17:14.175 { 00:17:14.175 "name": "spare", 00:17:14.175 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 }, 00:17:14.175 { 00:17:14.175 "name": "BaseBdev2", 00:17:14.175 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 }, 00:17:14.175 { 00:17:14.175 "name": "BaseBdev3", 00:17:14.175 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 } 00:17:14.175 ] 00:17:14.175 }' 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.175 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.435 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.435 "name": "raid_bdev1", 00:17:14.435 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:14.435 "strip_size_kb": 64, 00:17:14.435 "state": "online", 00:17:14.435 "raid_level": "raid5f", 00:17:14.435 "superblock": true, 00:17:14.435 "num_base_bdevs": 3, 00:17:14.435 "num_base_bdevs_discovered": 3, 00:17:14.435 "num_base_bdevs_operational": 3, 00:17:14.435 "base_bdevs_list": [ 00:17:14.435 { 00:17:14.435 "name": "spare", 00:17:14.435 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:14.435 "is_configured": true, 00:17:14.435 "data_offset": 2048, 00:17:14.435 "data_size": 63488 00:17:14.435 }, 00:17:14.435 { 00:17:14.435 "name": "BaseBdev2", 00:17:14.435 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:14.436 "is_configured": true, 00:17:14.436 "data_offset": 2048, 00:17:14.436 "data_size": 63488 00:17:14.436 }, 00:17:14.436 { 00:17:14.436 "name": "BaseBdev3", 00:17:14.436 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 
00:17:14.436 "is_configured": true, 00:17:14.436 "data_offset": 2048, 00:17:14.436 "data_size": 63488 00:17:14.436 } 00:17:14.436 ] 00:17:14.436 }' 00:17:14.436 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.436 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.436 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.695 [2024-11-05 11:33:13.800041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.695 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.696 "name": "raid_bdev1", 00:17:14.696 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:14.696 "strip_size_kb": 64, 00:17:14.696 "state": "online", 00:17:14.696 "raid_level": "raid5f", 00:17:14.696 "superblock": true, 00:17:14.696 "num_base_bdevs": 3, 00:17:14.696 "num_base_bdevs_discovered": 2, 00:17:14.696 "num_base_bdevs_operational": 2, 00:17:14.696 "base_bdevs_list": [ 00:17:14.696 { 
00:17:14.696 "name": null, 00:17:14.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.696 "is_configured": false, 00:17:14.696 "data_offset": 0, 00:17:14.696 "data_size": 63488 00:17:14.696 }, 00:17:14.696 { 00:17:14.696 "name": "BaseBdev2", 00:17:14.696 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:14.696 "is_configured": true, 00:17:14.696 "data_offset": 2048, 00:17:14.696 "data_size": 63488 00:17:14.696 }, 00:17:14.696 { 00:17:14.696 "name": "BaseBdev3", 00:17:14.696 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:14.696 "is_configured": true, 00:17:14.696 "data_offset": 2048, 00:17:14.696 "data_size": 63488 00:17:14.696 } 00:17:14.696 ] 00:17:14.696 }' 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.696 11:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.265 11:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.265 11:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.265 11:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.265 [2024-11-05 11:33:14.271281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.265 [2024-11-05 11:33:14.271517] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:15.265 [2024-11-05 11:33:14.271591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:15.265 [2024-11-05 11:33:14.271652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.265 [2024-11-05 11:33:14.286774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:15.265 11:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.265 11:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:15.265 [2024-11-05 11:33:14.294041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.203 "name": "raid_bdev1", 00:17:16.203 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:16.203 "strip_size_kb": 64, 00:17:16.203 "state": "online", 00:17:16.203 
"raid_level": "raid5f", 00:17:16.203 "superblock": true, 00:17:16.203 "num_base_bdevs": 3, 00:17:16.203 "num_base_bdevs_discovered": 3, 00:17:16.203 "num_base_bdevs_operational": 3, 00:17:16.203 "process": { 00:17:16.203 "type": "rebuild", 00:17:16.203 "target": "spare", 00:17:16.203 "progress": { 00:17:16.203 "blocks": 20480, 00:17:16.203 "percent": 16 00:17:16.203 } 00:17:16.203 }, 00:17:16.203 "base_bdevs_list": [ 00:17:16.203 { 00:17:16.203 "name": "spare", 00:17:16.203 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:16.203 "is_configured": true, 00:17:16.203 "data_offset": 2048, 00:17:16.203 "data_size": 63488 00:17:16.203 }, 00:17:16.203 { 00:17:16.203 "name": "BaseBdev2", 00:17:16.203 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:16.203 "is_configured": true, 00:17:16.203 "data_offset": 2048, 00:17:16.203 "data_size": 63488 00:17:16.203 }, 00:17:16.203 { 00:17:16.203 "name": "BaseBdev3", 00:17:16.203 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:16.203 "is_configured": true, 00:17:16.203 "data_offset": 2048, 00:17:16.203 "data_size": 63488 00:17:16.203 } 00:17:16.203 ] 00:17:16.203 }' 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.203 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.203 [2024-11-05 11:33:15.448790] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.463 [2024-11-05 11:33:15.501646] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.463 [2024-11-05 11:33:15.501705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.463 [2024-11-05 11:33:15.501719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.463 [2024-11-05 11:33:15.501728] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.463 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.463 "name": "raid_bdev1", 00:17:16.463 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:16.463 "strip_size_kb": 64, 00:17:16.463 "state": "online", 00:17:16.463 "raid_level": "raid5f", 00:17:16.463 "superblock": true, 00:17:16.463 "num_base_bdevs": 3, 00:17:16.463 "num_base_bdevs_discovered": 2, 00:17:16.464 "num_base_bdevs_operational": 2, 00:17:16.464 "base_bdevs_list": [ 00:17:16.464 { 00:17:16.464 "name": null, 00:17:16.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.464 "is_configured": false, 00:17:16.464 "data_offset": 0, 00:17:16.464 "data_size": 63488 00:17:16.464 }, 00:17:16.464 { 00:17:16.464 "name": "BaseBdev2", 00:17:16.464 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:16.464 "is_configured": true, 00:17:16.464 "data_offset": 2048, 00:17:16.464 "data_size": 63488 00:17:16.464 }, 00:17:16.464 { 00:17:16.464 "name": "BaseBdev3", 00:17:16.464 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:16.464 "is_configured": true, 00:17:16.464 "data_offset": 2048, 00:17:16.464 "data_size": 63488 00:17:16.464 } 00:17:16.464 ] 00:17:16.464 }' 00:17:16.464 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.464 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.032 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:17.032 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.032 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.032 [2024-11-05 11:33:16.012505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:17.032 [2024-11-05 11:33:16.012569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.032 [2024-11-05 11:33:16.012587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:17.032 [2024-11-05 11:33:16.012600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.032 [2024-11-05 11:33:16.013054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.032 [2024-11-05 11:33:16.013075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:17.032 [2024-11-05 11:33:16.013178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:17.032 [2024-11-05 11:33:16.013194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.032 [2024-11-05 11:33:16.013202] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:17.032 [2024-11-05 11:33:16.013226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.032 [2024-11-05 11:33:16.028102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:17.032 spare 00:17:17.032 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.032 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:17.032 [2024-11-05 11:33:16.035020] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.969 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.969 "name": "raid_bdev1", 00:17:17.969 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:17.969 "strip_size_kb": 64, 00:17:17.969 "state": 
"online", 00:17:17.969 "raid_level": "raid5f", 00:17:17.969 "superblock": true, 00:17:17.969 "num_base_bdevs": 3, 00:17:17.969 "num_base_bdevs_discovered": 3, 00:17:17.969 "num_base_bdevs_operational": 3, 00:17:17.969 "process": { 00:17:17.969 "type": "rebuild", 00:17:17.969 "target": "spare", 00:17:17.969 "progress": { 00:17:17.969 "blocks": 20480, 00:17:17.969 "percent": 16 00:17:17.969 } 00:17:17.969 }, 00:17:17.969 "base_bdevs_list": [ 00:17:17.969 { 00:17:17.970 "name": "spare", 00:17:17.970 "uuid": "d4af40b0-6362-55ed-80e7-ccb0f68cbd34", 00:17:17.970 "is_configured": true, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 }, 00:17:17.970 { 00:17:17.970 "name": "BaseBdev2", 00:17:17.970 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:17.970 "is_configured": true, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 }, 00:17:17.970 { 00:17:17.970 "name": "BaseBdev3", 00:17:17.970 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:17.970 "is_configured": true, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 } 00:17:17.970 ] 00:17:17.970 }' 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.970 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 [2024-11-05 11:33:17.193901] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.970 [2024-11-05 11:33:17.241987] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.970 [2024-11-05 11:33:17.242037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.970 [2024-11-05 11:33:17.242054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.970 [2024-11-05 11:33:17.242061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.229 "name": "raid_bdev1", 00:17:18.229 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:18.229 "strip_size_kb": 64, 00:17:18.229 "state": "online", 00:17:18.229 "raid_level": "raid5f", 00:17:18.229 "superblock": true, 00:17:18.229 "num_base_bdevs": 3, 00:17:18.229 "num_base_bdevs_discovered": 2, 00:17:18.229 "num_base_bdevs_operational": 2, 00:17:18.229 "base_bdevs_list": [ 00:17:18.229 { 00:17:18.229 "name": null, 00:17:18.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.229 "is_configured": false, 00:17:18.229 "data_offset": 0, 00:17:18.229 "data_size": 63488 00:17:18.229 }, 00:17:18.229 { 00:17:18.229 "name": "BaseBdev2", 00:17:18.229 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:18.229 "is_configured": true, 00:17:18.229 "data_offset": 2048, 00:17:18.229 "data_size": 63488 00:17:18.229 }, 00:17:18.229 { 00:17:18.229 "name": "BaseBdev3", 00:17:18.229 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:18.229 "is_configured": true, 00:17:18.229 "data_offset": 2048, 00:17:18.229 "data_size": 63488 00:17:18.229 } 00:17:18.229 ] 00:17:18.229 }' 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.229 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.489 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.489 "name": "raid_bdev1", 00:17:18.489 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:18.489 "strip_size_kb": 64, 00:17:18.489 "state": "online", 00:17:18.489 "raid_level": "raid5f", 00:17:18.489 "superblock": true, 00:17:18.489 "num_base_bdevs": 3, 00:17:18.490 "num_base_bdevs_discovered": 2, 00:17:18.490 "num_base_bdevs_operational": 2, 00:17:18.490 "base_bdevs_list": [ 00:17:18.490 { 00:17:18.490 "name": null, 00:17:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.490 "is_configured": false, 00:17:18.490 "data_offset": 0, 00:17:18.490 "data_size": 63488 00:17:18.490 }, 00:17:18.490 { 00:17:18.490 "name": "BaseBdev2", 00:17:18.490 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:18.490 "is_configured": true, 00:17:18.490 "data_offset": 2048, 00:17:18.490 "data_size": 63488 00:17:18.490 }, 00:17:18.490 { 00:17:18.490 "name": "BaseBdev3", 00:17:18.490 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:18.490 "is_configured": true, 
00:17:18.490 "data_offset": 2048, 00:17:18.490 "data_size": 63488 00:17:18.490 } 00:17:18.490 ] 00:17:18.490 }' 00:17:18.490 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.749 [2024-11-05 11:33:17.823235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.749 [2024-11-05 11:33:17.823355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.749 [2024-11-05 11:33:17.823381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:18.749 [2024-11-05 11:33:17.823391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.749 [2024-11-05 11:33:17.823829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.749 [2024-11-05 
11:33:17.823846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.749 [2024-11-05 11:33:17.823922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.749 [2024-11-05 11:33:17.823935] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.749 [2024-11-05 11:33:17.823959] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.749 [2024-11-05 11:33:17.823969] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:18.749 BaseBdev1 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.749 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.688 11:33:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.688 "name": "raid_bdev1", 00:17:19.688 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:19.688 "strip_size_kb": 64, 00:17:19.688 "state": "online", 00:17:19.688 "raid_level": "raid5f", 00:17:19.688 "superblock": true, 00:17:19.688 "num_base_bdevs": 3, 00:17:19.688 "num_base_bdevs_discovered": 2, 00:17:19.688 "num_base_bdevs_operational": 2, 00:17:19.688 "base_bdevs_list": [ 00:17:19.688 { 00:17:19.688 "name": null, 00:17:19.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.688 "is_configured": false, 00:17:19.688 "data_offset": 0, 00:17:19.688 "data_size": 63488 00:17:19.688 }, 00:17:19.688 { 00:17:19.688 "name": "BaseBdev2", 00:17:19.688 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:19.688 "is_configured": true, 00:17:19.688 "data_offset": 2048, 00:17:19.688 "data_size": 63488 00:17:19.688 }, 00:17:19.688 { 00:17:19.688 "name": "BaseBdev3", 00:17:19.688 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:19.688 "is_configured": true, 00:17:19.688 "data_offset": 2048, 00:17:19.688 "data_size": 63488 00:17:19.688 } 00:17:19.688 ] 00:17:19.688 }' 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.688 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.258 "name": "raid_bdev1", 00:17:20.258 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:20.258 "strip_size_kb": 64, 00:17:20.258 "state": "online", 00:17:20.258 "raid_level": "raid5f", 00:17:20.258 "superblock": true, 00:17:20.258 "num_base_bdevs": 3, 00:17:20.258 "num_base_bdevs_discovered": 2, 00:17:20.258 "num_base_bdevs_operational": 2, 00:17:20.258 "base_bdevs_list": [ 00:17:20.258 { 00:17:20.258 "name": null, 00:17:20.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.258 "is_configured": false, 00:17:20.258 "data_offset": 0, 00:17:20.258 "data_size": 63488 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "name": "BaseBdev2", 00:17:20.258 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 
00:17:20.258 "is_configured": true, 00:17:20.258 "data_offset": 2048, 00:17:20.258 "data_size": 63488 00:17:20.258 }, 00:17:20.258 { 00:17:20.258 "name": "BaseBdev3", 00:17:20.258 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:20.258 "is_configured": true, 00:17:20.258 "data_offset": 2048, 00:17:20.258 "data_size": 63488 00:17:20.258 } 00:17:20.258 ] 00:17:20.258 }' 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.258 11:33:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.258 [2024-11-05 11:33:19.419277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.258 [2024-11-05 11:33:19.419499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.258 [2024-11-05 11:33:19.419567] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.258 request: 00:17:20.258 { 00:17:20.258 "base_bdev": "BaseBdev1", 00:17:20.258 "raid_bdev": "raid_bdev1", 00:17:20.258 "method": "bdev_raid_add_base_bdev", 00:17:20.258 "req_id": 1 00:17:20.258 } 00:17:20.258 Got JSON-RPC error response 00:17:20.258 response: 00:17:20.258 { 00:17:20.258 "code": -22, 00:17:20.258 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:20.258 } 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.258 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.205 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.464 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.464 "name": "raid_bdev1", 00:17:21.464 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:21.464 "strip_size_kb": 64, 00:17:21.464 "state": "online", 00:17:21.464 "raid_level": "raid5f", 00:17:21.464 "superblock": true, 00:17:21.464 "num_base_bdevs": 3, 00:17:21.464 "num_base_bdevs_discovered": 2, 00:17:21.464 "num_base_bdevs_operational": 2, 00:17:21.464 "base_bdevs_list": [ 00:17:21.464 { 00:17:21.464 "name": null, 00:17:21.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.464 "is_configured": false, 00:17:21.464 "data_offset": 0, 00:17:21.464 "data_size": 63488 00:17:21.464 }, 00:17:21.464 { 00:17:21.464 
"name": "BaseBdev2", 00:17:21.464 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:21.464 "is_configured": true, 00:17:21.464 "data_offset": 2048, 00:17:21.464 "data_size": 63488 00:17:21.464 }, 00:17:21.464 { 00:17:21.464 "name": "BaseBdev3", 00:17:21.464 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:21.464 "is_configured": true, 00:17:21.464 "data_offset": 2048, 00:17:21.464 "data_size": 63488 00:17:21.464 } 00:17:21.464 ] 00:17:21.464 }' 00:17:21.464 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.464 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.723 "name": "raid_bdev1", 00:17:21.723 "uuid": "d35b4fc8-fe78-449c-becf-cd3a996e2789", 00:17:21.723 
"strip_size_kb": 64, 00:17:21.723 "state": "online", 00:17:21.723 "raid_level": "raid5f", 00:17:21.723 "superblock": true, 00:17:21.723 "num_base_bdevs": 3, 00:17:21.723 "num_base_bdevs_discovered": 2, 00:17:21.723 "num_base_bdevs_operational": 2, 00:17:21.723 "base_bdevs_list": [ 00:17:21.723 { 00:17:21.723 "name": null, 00:17:21.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.723 "is_configured": false, 00:17:21.723 "data_offset": 0, 00:17:21.723 "data_size": 63488 00:17:21.723 }, 00:17:21.723 { 00:17:21.723 "name": "BaseBdev2", 00:17:21.723 "uuid": "ea123edf-abc2-5d7b-847a-3ac44c59461b", 00:17:21.723 "is_configured": true, 00:17:21.723 "data_offset": 2048, 00:17:21.723 "data_size": 63488 00:17:21.723 }, 00:17:21.723 { 00:17:21.723 "name": "BaseBdev3", 00:17:21.723 "uuid": "4976509d-eed7-5974-92c6-6e9065f70b45", 00:17:21.723 "is_configured": true, 00:17:21.723 "data_offset": 2048, 00:17:21.723 "data_size": 63488 00:17:21.723 } 00:17:21.723 ] 00:17:21.723 }' 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82045 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82045 ']' 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82045 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:21.723 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:21.723 11:33:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82045 00:17:21.982 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:21.982 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:21.982 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82045' 00:17:21.982 killing process with pid 82045 00:17:21.982 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.982 00:17:21.982 Latency(us) 00:17:21.982 [2024-11-05T11:33:21.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.982 [2024-11-05T11:33:21.256Z] =================================================================================================================== 00:17:21.982 [2024-11-05T11:33:21.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.982 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82045 00:17:21.982 [2024-11-05 11:33:21.012194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.982 [2024-11-05 11:33:21.012318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.982 [2024-11-05 11:33:21.012383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.982 [2024-11-05 11:33:21.012397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.982 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82045 00:17:22.241 [2024-11-05 11:33:21.388935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.179 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.179 ************************************ 00:17:23.179 END TEST 
raid5f_rebuild_test_sb 00:17:23.179 ************************************ 00:17:23.179 00:17:23.179 real 0m22.941s 00:17:23.179 user 0m29.362s 00:17:23.179 sys 0m2.708s 00:17:23.179 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:23.179 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.440 11:33:22 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:23.440 11:33:22 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:23.440 11:33:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:23.440 11:33:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:23.440 11:33:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.440 ************************************ 00:17:23.440 START TEST raid5f_state_function_test 00:17:23.440 ************************************ 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82796 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82796' 00:17:23.440 Process raid pid: 82796 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82796 00:17:23.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 82796 ']' 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:23.440 11:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.440 [2024-11-05 11:33:22.576846] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:17:23.440 [2024-11-05 11:33:22.577048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.700 [2024-11-05 11:33:22.750925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.700 [2024-11-05 11:33:22.856096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.958 [2024-11-05 11:33:23.051863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.958 [2024-11-05 11:33:23.051896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.218 [2024-11-05 11:33:23.391312] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.218 [2024-11-05 11:33:23.391363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.218 [2024-11-05 11:33:23.391373] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.218 [2024-11-05 11:33:23.391383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.218 [2024-11-05 11:33:23.391389] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:24.218 [2024-11-05 11:33:23.391397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.218 [2024-11-05 11:33:23.391403] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.218 [2024-11-05 11:33:23.391411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.218 "name": "Existed_Raid", 00:17:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.218 "strip_size_kb": 64, 00:17:24.218 "state": "configuring", 00:17:24.218 "raid_level": "raid5f", 00:17:24.218 "superblock": false, 00:17:24.218 "num_base_bdevs": 4, 00:17:24.218 "num_base_bdevs_discovered": 0, 00:17:24.218 "num_base_bdevs_operational": 4, 00:17:24.218 "base_bdevs_list": [ 00:17:24.218 { 00:17:24.218 "name": "BaseBdev1", 00:17:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.218 "is_configured": false, 00:17:24.218 "data_offset": 0, 00:17:24.218 "data_size": 0 00:17:24.218 }, 00:17:24.218 { 00:17:24.218 "name": "BaseBdev2", 00:17:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.218 "is_configured": false, 00:17:24.218 "data_offset": 0, 00:17:24.218 "data_size": 0 00:17:24.218 }, 00:17:24.218 { 00:17:24.218 "name": "BaseBdev3", 00:17:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.218 "is_configured": false, 00:17:24.218 "data_offset": 0, 00:17:24.218 "data_size": 0 00:17:24.218 }, 00:17:24.218 { 00:17:24.218 "name": "BaseBdev4", 00:17:24.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.218 "is_configured": false, 00:17:24.218 "data_offset": 0, 00:17:24.218 "data_size": 0 00:17:24.218 } 00:17:24.218 ] 00:17:24.218 }' 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.218 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.787 [2024-11-05 11:33:23.866417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.787 [2024-11-05 11:33:23.866516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.787 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.787 [2024-11-05 11:33:23.874424] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.787 [2024-11-05 11:33:23.874514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.787 [2024-11-05 11:33:23.874541] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.788 [2024-11-05 11:33:23.874563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.788 [2024-11-05 11:33:23.874580] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.788 [2024-11-05 11:33:23.874600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.788 [2024-11-05 11:33:23.874618] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:24.788 [2024-11-05 11:33:23.874638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 [2024-11-05 11:33:23.917366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.788 BaseBdev1 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.788 
11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 [ 00:17:24.788 { 00:17:24.788 "name": "BaseBdev1", 00:17:24.788 "aliases": [ 00:17:24.788 "9485648f-56a0-4b81-b29e-2fe0f9290976" 00:17:24.788 ], 00:17:24.788 "product_name": "Malloc disk", 00:17:24.788 "block_size": 512, 00:17:24.788 "num_blocks": 65536, 00:17:24.788 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:24.788 "assigned_rate_limits": { 00:17:24.788 "rw_ios_per_sec": 0, 00:17:24.788 "rw_mbytes_per_sec": 0, 00:17:24.788 "r_mbytes_per_sec": 0, 00:17:24.788 "w_mbytes_per_sec": 0 00:17:24.788 }, 00:17:24.788 "claimed": true, 00:17:24.788 "claim_type": "exclusive_write", 00:17:24.788 "zoned": false, 00:17:24.788 "supported_io_types": { 00:17:24.788 "read": true, 00:17:24.788 "write": true, 00:17:24.788 "unmap": true, 00:17:24.788 "flush": true, 00:17:24.788 "reset": true, 00:17:24.788 "nvme_admin": false, 00:17:24.788 "nvme_io": false, 00:17:24.788 "nvme_io_md": false, 00:17:24.788 "write_zeroes": true, 00:17:24.788 "zcopy": true, 00:17:24.788 "get_zone_info": false, 00:17:24.788 "zone_management": false, 00:17:24.788 "zone_append": false, 00:17:24.788 "compare": false, 00:17:24.788 "compare_and_write": false, 00:17:24.788 "abort": true, 00:17:24.788 "seek_hole": false, 00:17:24.788 "seek_data": false, 00:17:24.788 "copy": true, 00:17:24.788 "nvme_iov_md": false 00:17:24.788 }, 00:17:24.788 "memory_domains": [ 00:17:24.788 { 00:17:24.788 "dma_device_id": "system", 00:17:24.788 "dma_device_type": 1 00:17:24.788 }, 00:17:24.788 { 00:17:24.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.788 "dma_device_type": 2 00:17:24.788 } 00:17:24.788 ], 00:17:24.788 "driver_specific": {} 00:17:24.788 } 
00:17:24.788 ] 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 11:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:24.788 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.788 "name": "Existed_Raid", 00:17:24.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.788 "strip_size_kb": 64, 00:17:24.788 "state": "configuring", 00:17:24.788 "raid_level": "raid5f", 00:17:24.788 "superblock": false, 00:17:24.788 "num_base_bdevs": 4, 00:17:24.788 "num_base_bdevs_discovered": 1, 00:17:24.788 "num_base_bdevs_operational": 4, 00:17:24.788 "base_bdevs_list": [ 00:17:24.788 { 00:17:24.788 "name": "BaseBdev1", 00:17:24.788 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:24.788 "is_configured": true, 00:17:24.788 "data_offset": 0, 00:17:24.788 "data_size": 65536 00:17:24.788 }, 00:17:24.788 { 00:17:24.788 "name": "BaseBdev2", 00:17:24.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.788 "is_configured": false, 00:17:24.788 "data_offset": 0, 00:17:24.788 "data_size": 0 00:17:24.788 }, 00:17:24.788 { 00:17:24.788 "name": "BaseBdev3", 00:17:24.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.788 "is_configured": false, 00:17:24.788 "data_offset": 0, 00:17:24.788 "data_size": 0 00:17:24.788 }, 00:17:24.788 { 00:17:24.788 "name": "BaseBdev4", 00:17:24.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.788 "is_configured": false, 00:17:24.788 "data_offset": 0, 00:17:24.788 "data_size": 0 00:17:24.788 } 00:17:24.788 ] 00:17:24.788 }' 00:17:24.788 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.788 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.357 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.357 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.357 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.357 
[2024-11-05 11:33:24.400571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.358 [2024-11-05 11:33:24.400613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.358 [2024-11-05 11:33:24.412612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.358 [2024-11-05 11:33:24.414388] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.358 [2024-11-05 11:33:24.414476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.358 [2024-11-05 11:33:24.414504] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.358 [2024-11-05 11:33:24.414527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.358 [2024-11-05 11:33:24.414545] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:25.358 [2024-11-05 11:33:24.414565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.358 "name": "Existed_Raid", 00:17:25.358 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:25.358 "strip_size_kb": 64, 00:17:25.358 "state": "configuring", 00:17:25.358 "raid_level": "raid5f", 00:17:25.358 "superblock": false, 00:17:25.358 "num_base_bdevs": 4, 00:17:25.358 "num_base_bdevs_discovered": 1, 00:17:25.358 "num_base_bdevs_operational": 4, 00:17:25.358 "base_bdevs_list": [ 00:17:25.358 { 00:17:25.358 "name": "BaseBdev1", 00:17:25.358 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:25.358 "is_configured": true, 00:17:25.358 "data_offset": 0, 00:17:25.358 "data_size": 65536 00:17:25.358 }, 00:17:25.358 { 00:17:25.358 "name": "BaseBdev2", 00:17:25.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.358 "is_configured": false, 00:17:25.358 "data_offset": 0, 00:17:25.358 "data_size": 0 00:17:25.358 }, 00:17:25.358 { 00:17:25.358 "name": "BaseBdev3", 00:17:25.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.358 "is_configured": false, 00:17:25.358 "data_offset": 0, 00:17:25.358 "data_size": 0 00:17:25.358 }, 00:17:25.358 { 00:17:25.358 "name": "BaseBdev4", 00:17:25.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.358 "is_configured": false, 00:17:25.358 "data_offset": 0, 00:17:25.358 "data_size": 0 00:17:25.358 } 00:17:25.358 ] 00:17:25.358 }' 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.358 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.617 [2024-11-05 11:33:24.860835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.617 BaseBdev2 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:25.617 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.618 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.618 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.618 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.618 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.618 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.618 [ 00:17:25.618 { 00:17:25.618 "name": "BaseBdev2", 00:17:25.618 "aliases": [ 00:17:25.618 "12a929ba-64a0-47c0-9e68-a2e4caf5762b" 00:17:25.618 ], 00:17:25.618 "product_name": "Malloc disk", 00:17:25.618 "block_size": 512, 00:17:25.618 "num_blocks": 65536, 00:17:25.618 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:25.618 "assigned_rate_limits": { 00:17:25.618 "rw_ios_per_sec": 0, 00:17:25.618 "rw_mbytes_per_sec": 0, 00:17:25.618 
"r_mbytes_per_sec": 0, 00:17:25.618 "w_mbytes_per_sec": 0 00:17:25.618 }, 00:17:25.618 "claimed": true, 00:17:25.618 "claim_type": "exclusive_write", 00:17:25.618 "zoned": false, 00:17:25.618 "supported_io_types": { 00:17:25.618 "read": true, 00:17:25.618 "write": true, 00:17:25.618 "unmap": true, 00:17:25.618 "flush": true, 00:17:25.618 "reset": true, 00:17:25.618 "nvme_admin": false, 00:17:25.618 "nvme_io": false, 00:17:25.618 "nvme_io_md": false, 00:17:25.877 "write_zeroes": true, 00:17:25.877 "zcopy": true, 00:17:25.877 "get_zone_info": false, 00:17:25.877 "zone_management": false, 00:17:25.877 "zone_append": false, 00:17:25.877 "compare": false, 00:17:25.877 "compare_and_write": false, 00:17:25.877 "abort": true, 00:17:25.877 "seek_hole": false, 00:17:25.877 "seek_data": false, 00:17:25.877 "copy": true, 00:17:25.877 "nvme_iov_md": false 00:17:25.877 }, 00:17:25.877 "memory_domains": [ 00:17:25.877 { 00:17:25.877 "dma_device_id": "system", 00:17:25.877 "dma_device_type": 1 00:17:25.877 }, 00:17:25.877 { 00:17:25.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.877 "dma_device_type": 2 00:17:25.877 } 00:17:25.877 ], 00:17:25.877 "driver_specific": {} 00:17:25.877 } 00:17:25.877 ] 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.877 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.878 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.878 "name": "Existed_Raid", 00:17:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.878 "strip_size_kb": 64, 00:17:25.878 "state": "configuring", 00:17:25.878 "raid_level": "raid5f", 00:17:25.878 "superblock": false, 00:17:25.878 "num_base_bdevs": 4, 00:17:25.878 "num_base_bdevs_discovered": 2, 00:17:25.878 "num_base_bdevs_operational": 4, 00:17:25.878 "base_bdevs_list": [ 00:17:25.878 { 00:17:25.878 "name": "BaseBdev1", 00:17:25.878 "uuid": 
"9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:25.878 "is_configured": true, 00:17:25.878 "data_offset": 0, 00:17:25.878 "data_size": 65536 00:17:25.878 }, 00:17:25.878 { 00:17:25.878 "name": "BaseBdev2", 00:17:25.878 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:25.878 "is_configured": true, 00:17:25.878 "data_offset": 0, 00:17:25.878 "data_size": 65536 00:17:25.878 }, 00:17:25.878 { 00:17:25.878 "name": "BaseBdev3", 00:17:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.878 "is_configured": false, 00:17:25.878 "data_offset": 0, 00:17:25.878 "data_size": 0 00:17:25.878 }, 00:17:25.878 { 00:17:25.878 "name": "BaseBdev4", 00:17:25.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.878 "is_configured": false, 00:17:25.878 "data_offset": 0, 00:17:25.878 "data_size": 0 00:17:25.878 } 00:17:25.878 ] 00:17:25.878 }' 00:17:25.878 11:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.878 11:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.137 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:26.137 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.137 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 [2024-11-05 11:33:25.452380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.397 BaseBdev3 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 [ 00:17:26.397 { 00:17:26.397 "name": "BaseBdev3", 00:17:26.397 "aliases": [ 00:17:26.397 "e239c10e-2368-4471-b72f-a1d4b566c92f" 00:17:26.397 ], 00:17:26.397 "product_name": "Malloc disk", 00:17:26.397 "block_size": 512, 00:17:26.397 "num_blocks": 65536, 00:17:26.397 "uuid": "e239c10e-2368-4471-b72f-a1d4b566c92f", 00:17:26.397 "assigned_rate_limits": { 00:17:26.397 "rw_ios_per_sec": 0, 00:17:26.397 "rw_mbytes_per_sec": 0, 00:17:26.397 "r_mbytes_per_sec": 0, 00:17:26.397 "w_mbytes_per_sec": 0 00:17:26.397 }, 00:17:26.397 "claimed": true, 00:17:26.397 "claim_type": "exclusive_write", 00:17:26.397 "zoned": false, 00:17:26.397 "supported_io_types": { 00:17:26.397 "read": true, 00:17:26.397 "write": true, 00:17:26.397 "unmap": true, 00:17:26.397 "flush": true, 00:17:26.397 "reset": true, 00:17:26.397 "nvme_admin": false, 
00:17:26.397 "nvme_io": false, 00:17:26.397 "nvme_io_md": false, 00:17:26.397 "write_zeroes": true, 00:17:26.397 "zcopy": true, 00:17:26.397 "get_zone_info": false, 00:17:26.397 "zone_management": false, 00:17:26.397 "zone_append": false, 00:17:26.397 "compare": false, 00:17:26.397 "compare_and_write": false, 00:17:26.397 "abort": true, 00:17:26.397 "seek_hole": false, 00:17:26.397 "seek_data": false, 00:17:26.397 "copy": true, 00:17:26.397 "nvme_iov_md": false 00:17:26.397 }, 00:17:26.397 "memory_domains": [ 00:17:26.397 { 00:17:26.397 "dma_device_id": "system", 00:17:26.397 "dma_device_type": 1 00:17:26.397 }, 00:17:26.397 { 00:17:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.397 "dma_device_type": 2 00:17:26.397 } 00:17:26.397 ], 00:17:26.397 "driver_specific": {} 00:17:26.397 } 00:17:26.397 ] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.397 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.398 "name": "Existed_Raid", 00:17:26.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.398 "strip_size_kb": 64, 00:17:26.398 "state": "configuring", 00:17:26.398 "raid_level": "raid5f", 00:17:26.398 "superblock": false, 00:17:26.398 "num_base_bdevs": 4, 00:17:26.398 "num_base_bdevs_discovered": 3, 00:17:26.398 "num_base_bdevs_operational": 4, 00:17:26.398 "base_bdevs_list": [ 00:17:26.398 { 00:17:26.398 "name": "BaseBdev1", 00:17:26.398 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:26.398 "is_configured": true, 00:17:26.398 "data_offset": 0, 00:17:26.398 "data_size": 65536 00:17:26.398 }, 00:17:26.398 { 00:17:26.398 "name": "BaseBdev2", 00:17:26.398 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:26.398 "is_configured": true, 00:17:26.398 "data_offset": 0, 00:17:26.398 "data_size": 65536 00:17:26.398 }, 00:17:26.398 { 
00:17:26.398 "name": "BaseBdev3", 00:17:26.398 "uuid": "e239c10e-2368-4471-b72f-a1d4b566c92f", 00:17:26.398 "is_configured": true, 00:17:26.398 "data_offset": 0, 00:17:26.398 "data_size": 65536 00:17:26.398 }, 00:17:26.398 { 00:17:26.398 "name": "BaseBdev4", 00:17:26.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.398 "is_configured": false, 00:17:26.398 "data_offset": 0, 00:17:26.398 "data_size": 0 00:17:26.398 } 00:17:26.398 ] 00:17:26.398 }' 00:17:26.398 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.398 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 11:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:26.967 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.967 11:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 [2024-11-05 11:33:25.996186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.967 [2024-11-05 11:33:25.996325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:26.967 [2024-11-05 11:33:25.996340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:26.967 [2024-11-05 11:33:25.996616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:26.967 [2024-11-05 11:33:26.003450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:26.967 [2024-11-05 11:33:26.003512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:26.967 [2024-11-05 11:33:26.003810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.967 BaseBdev4 00:17:26.967 11:33:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 [ 00:17:26.967 { 00:17:26.967 "name": "BaseBdev4", 00:17:26.967 "aliases": [ 00:17:26.967 "2a1ddb78-4101-462a-b053-276637786abe" 00:17:26.967 ], 00:17:26.967 "product_name": "Malloc disk", 00:17:26.967 "block_size": 512, 00:17:26.967 "num_blocks": 65536, 00:17:26.967 "uuid": "2a1ddb78-4101-462a-b053-276637786abe", 00:17:26.967 "assigned_rate_limits": { 00:17:26.967 "rw_ios_per_sec": 0, 00:17:26.967 
"rw_mbytes_per_sec": 0, 00:17:26.967 "r_mbytes_per_sec": 0, 00:17:26.967 "w_mbytes_per_sec": 0 00:17:26.967 }, 00:17:26.967 "claimed": true, 00:17:26.967 "claim_type": "exclusive_write", 00:17:26.967 "zoned": false, 00:17:26.967 "supported_io_types": { 00:17:26.967 "read": true, 00:17:26.967 "write": true, 00:17:26.967 "unmap": true, 00:17:26.967 "flush": true, 00:17:26.967 "reset": true, 00:17:26.967 "nvme_admin": false, 00:17:26.967 "nvme_io": false, 00:17:26.967 "nvme_io_md": false, 00:17:26.967 "write_zeroes": true, 00:17:26.967 "zcopy": true, 00:17:26.967 "get_zone_info": false, 00:17:26.967 "zone_management": false, 00:17:26.967 "zone_append": false, 00:17:26.967 "compare": false, 00:17:26.967 "compare_and_write": false, 00:17:26.967 "abort": true, 00:17:26.967 "seek_hole": false, 00:17:26.967 "seek_data": false, 00:17:26.967 "copy": true, 00:17:26.967 "nvme_iov_md": false 00:17:26.967 }, 00:17:26.967 "memory_domains": [ 00:17:26.967 { 00:17:26.967 "dma_device_id": "system", 00:17:26.967 "dma_device_type": 1 00:17:26.967 }, 00:17:26.967 { 00:17:26.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.967 "dma_device_type": 2 00:17:26.967 } 00:17:26.967 ], 00:17:26.967 "driver_specific": {} 00:17:26.967 } 00:17:26.967 ] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.967 11:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.967 "name": "Existed_Raid", 00:17:26.967 "uuid": "50c233ce-f29e-4a2b-82ac-5f760b226ef3", 00:17:26.967 "strip_size_kb": 64, 00:17:26.967 "state": "online", 00:17:26.967 "raid_level": "raid5f", 00:17:26.967 "superblock": false, 00:17:26.967 "num_base_bdevs": 4, 00:17:26.967 "num_base_bdevs_discovered": 4, 00:17:26.967 "num_base_bdevs_operational": 4, 00:17:26.967 "base_bdevs_list": [ 00:17:26.967 { 00:17:26.967 "name": 
"BaseBdev1", 00:17:26.967 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:26.967 "is_configured": true, 00:17:26.967 "data_offset": 0, 00:17:26.967 "data_size": 65536 00:17:26.967 }, 00:17:26.967 { 00:17:26.967 "name": "BaseBdev2", 00:17:26.967 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:26.967 "is_configured": true, 00:17:26.967 "data_offset": 0, 00:17:26.967 "data_size": 65536 00:17:26.967 }, 00:17:26.967 { 00:17:26.967 "name": "BaseBdev3", 00:17:26.967 "uuid": "e239c10e-2368-4471-b72f-a1d4b566c92f", 00:17:26.967 "is_configured": true, 00:17:26.967 "data_offset": 0, 00:17:26.967 "data_size": 65536 00:17:26.967 }, 00:17:26.967 { 00:17:26.967 "name": "BaseBdev4", 00:17:26.967 "uuid": "2a1ddb78-4101-462a-b053-276637786abe", 00:17:26.967 "is_configured": true, 00:17:26.967 "data_offset": 0, 00:17:26.967 "data_size": 65536 00:17:26.967 } 00:17:26.967 ] 00:17:26.967 }' 00:17:26.967 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.968 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:27.227 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.486 [2024-11-05 11:33:26.511185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.486 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.486 "name": "Existed_Raid", 00:17:27.486 "aliases": [ 00:17:27.486 "50c233ce-f29e-4a2b-82ac-5f760b226ef3" 00:17:27.486 ], 00:17:27.486 "product_name": "Raid Volume", 00:17:27.486 "block_size": 512, 00:17:27.486 "num_blocks": 196608, 00:17:27.486 "uuid": "50c233ce-f29e-4a2b-82ac-5f760b226ef3", 00:17:27.486 "assigned_rate_limits": { 00:17:27.486 "rw_ios_per_sec": 0, 00:17:27.486 "rw_mbytes_per_sec": 0, 00:17:27.486 "r_mbytes_per_sec": 0, 00:17:27.486 "w_mbytes_per_sec": 0 00:17:27.486 }, 00:17:27.486 "claimed": false, 00:17:27.486 "zoned": false, 00:17:27.486 "supported_io_types": { 00:17:27.486 "read": true, 00:17:27.486 "write": true, 00:17:27.486 "unmap": false, 00:17:27.486 "flush": false, 00:17:27.486 "reset": true, 00:17:27.486 "nvme_admin": false, 00:17:27.486 "nvme_io": false, 00:17:27.487 "nvme_io_md": false, 00:17:27.487 "write_zeroes": true, 00:17:27.487 "zcopy": false, 00:17:27.487 "get_zone_info": false, 00:17:27.487 "zone_management": false, 00:17:27.487 "zone_append": false, 00:17:27.487 "compare": false, 00:17:27.487 "compare_and_write": false, 00:17:27.487 "abort": false, 00:17:27.487 "seek_hole": false, 00:17:27.487 "seek_data": false, 00:17:27.487 "copy": false, 00:17:27.487 "nvme_iov_md": false 00:17:27.487 }, 00:17:27.487 "driver_specific": { 00:17:27.487 "raid": { 00:17:27.487 "uuid": "50c233ce-f29e-4a2b-82ac-5f760b226ef3", 00:17:27.487 "strip_size_kb": 64, 
00:17:27.487 "state": "online", 00:17:27.487 "raid_level": "raid5f", 00:17:27.487 "superblock": false, 00:17:27.487 "num_base_bdevs": 4, 00:17:27.487 "num_base_bdevs_discovered": 4, 00:17:27.487 "num_base_bdevs_operational": 4, 00:17:27.487 "base_bdevs_list": [ 00:17:27.487 { 00:17:27.487 "name": "BaseBdev1", 00:17:27.487 "uuid": "9485648f-56a0-4b81-b29e-2fe0f9290976", 00:17:27.487 "is_configured": true, 00:17:27.487 "data_offset": 0, 00:17:27.487 "data_size": 65536 00:17:27.487 }, 00:17:27.487 { 00:17:27.487 "name": "BaseBdev2", 00:17:27.487 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:27.487 "is_configured": true, 00:17:27.487 "data_offset": 0, 00:17:27.487 "data_size": 65536 00:17:27.487 }, 00:17:27.487 { 00:17:27.487 "name": "BaseBdev3", 00:17:27.487 "uuid": "e239c10e-2368-4471-b72f-a1d4b566c92f", 00:17:27.487 "is_configured": true, 00:17:27.487 "data_offset": 0, 00:17:27.487 "data_size": 65536 00:17:27.487 }, 00:17:27.487 { 00:17:27.487 "name": "BaseBdev4", 00:17:27.487 "uuid": "2a1ddb78-4101-462a-b053-276637786abe", 00:17:27.487 "is_configured": true, 00:17:27.487 "data_offset": 0, 00:17:27.487 "data_size": 65536 00:17:27.487 } 00:17:27.487 ] 00:17:27.487 } 00:17:27.487 } 00:17:27.487 }' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:27.487 BaseBdev2 00:17:27.487 BaseBdev3 00:17:27.487 BaseBdev4' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.487 11:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.487 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.747 [2024-11-05 11:33:26.858420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.747 11:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.747 11:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.747 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.747 "name": "Existed_Raid", 00:17:27.747 "uuid": "50c233ce-f29e-4a2b-82ac-5f760b226ef3", 00:17:27.747 "strip_size_kb": 64, 00:17:27.747 "state": "online", 00:17:27.747 "raid_level": "raid5f", 00:17:27.747 "superblock": false, 00:17:27.747 "num_base_bdevs": 4, 00:17:27.747 "num_base_bdevs_discovered": 3, 00:17:27.747 "num_base_bdevs_operational": 3, 00:17:27.747 "base_bdevs_list": [ 00:17:27.747 { 00:17:27.747 "name": null, 00:17:27.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.747 "is_configured": false, 00:17:27.747 "data_offset": 0, 00:17:27.747 "data_size": 65536 00:17:27.747 }, 00:17:27.747 { 00:17:27.747 "name": "BaseBdev2", 00:17:27.747 "uuid": "12a929ba-64a0-47c0-9e68-a2e4caf5762b", 00:17:27.747 "is_configured": true, 00:17:27.747 "data_offset": 0, 00:17:27.747 "data_size": 65536 00:17:27.747 }, 00:17:27.747 { 00:17:27.747 "name": "BaseBdev3", 00:17:27.747 "uuid": "e239c10e-2368-4471-b72f-a1d4b566c92f", 00:17:27.747 "is_configured": true, 00:17:27.747 "data_offset": 0, 00:17:27.747 "data_size": 65536 00:17:27.747 }, 00:17:27.747 { 00:17:27.747 "name": "BaseBdev4", 00:17:27.747 "uuid": "2a1ddb78-4101-462a-b053-276637786abe", 00:17:27.747 "is_configured": true, 00:17:27.747 "data_offset": 0, 00:17:27.747 "data_size": 65536 00:17:27.747 } 00:17:27.747 ] 00:17:27.747 }' 00:17:27.747 
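The bash helpers driving this stretch of the log (`verify_raid_bdev_state` at bdev_raid.sh@103-115 and `verify_raid_bdev_properties` at @181-193) lean on two jq idioms: selecting one record out of the `bdev_raid_get_bdevs` array by name, and collapsing layout fields into a comparison string with `join(" ")`. The sketch below re-expresses both in Python with field values copied from the JSON above; it is an illustration of the filters' semantics, not the test's actual code (the real checks run in bash with jq against a live SPDK target):

```python
import json

# Trimmed sample of `rpc_cmd bdev_raid_get_bdevs all` output,
# values copied from the log above.
raid_bdevs = json.dumps([{
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
}])

def select_by_name(raw, name):
    # jq: .[] | select(.name == "Existed_Raid")
    return next((b for b in json.loads(raw) if b["name"] == name), None)

def jq_join(values, sep=" "):
    # jq's join(): numbers are converted to strings and nulls become
    # empty strings, which is why the comparison strings in the log
    # come out as '512' followed by three spaces.
    return sep.join("" if v is None else str(v) for v in values)

info = select_by_name(raid_bdevs, "Existed_Raid")
assert info["state"] == "online" and info["raid_level"] == "raid5f"

# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# md_size / md_interleave / dif_type are null for plain malloc bdevs.
cmp_base_bdev = jq_join([512, None, None, None])
assert cmp_base_bdev == "512   "
```

The trailing spaces from the null fields are why the bash comparison at bdev_raid.sh@193 escapes three spaces in its pattern: `[[ 512 == \5\1\2\ \ \ ]]`.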
11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.747 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.316 [2024-11-05 11:33:27.429776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.316 [2024-11-05 11:33:27.429878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.316 [2024-11-05 11:33:27.518062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:28.316 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.317 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 [2024-11-05 11:33:27.577973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.576 [2024-11-05 11:33:27.727772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:28.576 [2024-11-05 11:33:27.727821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.576 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.576 11:33:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.836 BaseBdev2 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.836 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 [ 00:17:28.837 { 00:17:28.837 "name": "BaseBdev2", 00:17:28.837 "aliases": [ 00:17:28.837 "3cc2dfd4-4a28-45ac-8086-42b28a20a441" 00:17:28.837 ], 00:17:28.837 "product_name": "Malloc disk", 00:17:28.837 "block_size": 512, 00:17:28.837 "num_blocks": 65536, 00:17:28.837 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:28.837 "assigned_rate_limits": { 00:17:28.837 "rw_ios_per_sec": 0, 00:17:28.837 "rw_mbytes_per_sec": 0, 00:17:28.837 "r_mbytes_per_sec": 0, 00:17:28.837 "w_mbytes_per_sec": 0 00:17:28.837 }, 00:17:28.837 "claimed": false, 00:17:28.837 "zoned": false, 00:17:28.837 "supported_io_types": { 00:17:28.837 "read": true, 00:17:28.837 "write": true, 00:17:28.837 "unmap": true, 00:17:28.837 "flush": true, 00:17:28.837 "reset": true, 00:17:28.837 "nvme_admin": false, 00:17:28.837 "nvme_io": false, 00:17:28.837 "nvme_io_md": false, 00:17:28.837 "write_zeroes": true, 00:17:28.837 "zcopy": true, 00:17:28.837 "get_zone_info": false, 00:17:28.837 "zone_management": false, 00:17:28.837 "zone_append": false, 00:17:28.837 "compare": false, 00:17:28.837 "compare_and_write": false, 00:17:28.837 "abort": true, 00:17:28.837 "seek_hole": false, 00:17:28.837 "seek_data": false, 00:17:28.837 "copy": true, 00:17:28.837 "nvme_iov_md": false 00:17:28.837 }, 00:17:28.837 "memory_domains": [ 00:17:28.837 { 00:17:28.837 "dma_device_id": "system", 00:17:28.837 "dma_device_type": 1 00:17:28.837 }, 
00:17:28.837 { 00:17:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.837 "dma_device_type": 2 00:17:28.837 } 00:17:28.837 ], 00:17:28.837 "driver_specific": {} 00:17:28.837 } 00:17:28.837 ] 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 BaseBdev3 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 11:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 [ 00:17:28.837 { 00:17:28.837 "name": "BaseBdev3", 00:17:28.837 "aliases": [ 00:17:28.837 "3b797dd8-ff63-4a48-bd8b-18e193a03447" 00:17:28.837 ], 00:17:28.837 "product_name": "Malloc disk", 00:17:28.837 "block_size": 512, 00:17:28.837 "num_blocks": 65536, 00:17:28.837 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:28.837 "assigned_rate_limits": { 00:17:28.837 "rw_ios_per_sec": 0, 00:17:28.837 "rw_mbytes_per_sec": 0, 00:17:28.837 "r_mbytes_per_sec": 0, 00:17:28.837 "w_mbytes_per_sec": 0 00:17:28.837 }, 00:17:28.837 "claimed": false, 00:17:28.837 "zoned": false, 00:17:28.837 "supported_io_types": { 00:17:28.837 "read": true, 00:17:28.837 "write": true, 00:17:28.837 "unmap": true, 00:17:28.837 "flush": true, 00:17:28.837 "reset": true, 00:17:28.837 "nvme_admin": false, 00:17:28.837 "nvme_io": false, 00:17:28.837 "nvme_io_md": false, 00:17:28.837 "write_zeroes": true, 00:17:28.837 "zcopy": true, 00:17:28.837 "get_zone_info": false, 00:17:28.837 "zone_management": false, 00:17:28.837 "zone_append": false, 00:17:28.837 "compare": false, 00:17:28.837 "compare_and_write": false, 00:17:28.837 "abort": true, 00:17:28.837 "seek_hole": false, 00:17:28.837 "seek_data": false, 00:17:28.837 "copy": true, 00:17:28.837 "nvme_iov_md": false 00:17:28.837 }, 00:17:28.837 "memory_domains": [ 00:17:28.837 { 00:17:28.837 "dma_device_id": "system", 00:17:28.837 
"dma_device_type": 1 00:17:28.837 }, 00:17:28.837 { 00:17:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.837 "dma_device_type": 2 00:17:28.837 } 00:17:28.837 ], 00:17:28.837 "driver_specific": {} 00:17:28.837 } 00:17:28.837 ] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 BaseBdev4 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.837 11:33:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.837 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.837 [ 00:17:28.837 { 00:17:28.837 "name": "BaseBdev4", 00:17:28.837 "aliases": [ 00:17:28.837 "dde3359b-4933-4284-8552-3119ae36a7c7" 00:17:28.837 ], 00:17:28.837 "product_name": "Malloc disk", 00:17:28.837 "block_size": 512, 00:17:28.837 "num_blocks": 65536, 00:17:28.837 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:28.837 "assigned_rate_limits": { 00:17:28.837 "rw_ios_per_sec": 0, 00:17:28.837 "rw_mbytes_per_sec": 0, 00:17:28.837 "r_mbytes_per_sec": 0, 00:17:28.837 "w_mbytes_per_sec": 0 00:17:28.837 }, 00:17:28.837 "claimed": false, 00:17:28.837 "zoned": false, 00:17:28.837 "supported_io_types": { 00:17:28.837 "read": true, 00:17:28.837 "write": true, 00:17:28.837 "unmap": true, 00:17:28.837 "flush": true, 00:17:28.837 "reset": true, 00:17:28.837 "nvme_admin": false, 00:17:28.837 "nvme_io": false, 00:17:28.837 "nvme_io_md": false, 00:17:28.837 "write_zeroes": true, 00:17:28.837 "zcopy": true, 00:17:28.837 "get_zone_info": false, 00:17:28.837 "zone_management": false, 00:17:28.837 "zone_append": false, 00:17:28.837 "compare": false, 00:17:28.837 "compare_and_write": false, 00:17:28.837 "abort": true, 00:17:28.837 "seek_hole": false, 00:17:28.837 "seek_data": false, 00:17:28.837 "copy": true, 00:17:28.837 "nvme_iov_md": false 00:17:28.837 }, 00:17:28.837 "memory_domains": [ 00:17:28.837 { 00:17:28.838 
"dma_device_id": "system", 00:17:28.838 "dma_device_type": 1 00:17:28.838 }, 00:17:28.838 { 00:17:28.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.838 "dma_device_type": 2 00:17:28.838 } 00:17:28.838 ], 00:17:28.838 "driver_specific": {} 00:17:28.838 } 00:17:28.838 ] 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.838 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.097 [2024-11-05 11:33:28.112751] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.097 [2024-11-05 11:33:28.112836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.097 [2024-11-05 11:33:28.112878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.097 [2024-11-05 11:33:28.114624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.097 [2024-11-05 11:33:28.114738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.097 "name": "Existed_Raid", 00:17:29.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.097 "strip_size_kb": 64, 00:17:29.097 "state": "configuring", 00:17:29.097 "raid_level": "raid5f", 00:17:29.097 "superblock": false, 00:17:29.097 
"num_base_bdevs": 4, 00:17:29.097 "num_base_bdevs_discovered": 3, 00:17:29.097 "num_base_bdevs_operational": 4, 00:17:29.097 "base_bdevs_list": [ 00:17:29.097 { 00:17:29.097 "name": "BaseBdev1", 00:17:29.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.097 "is_configured": false, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 0 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev2", 00:17:29.097 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev3", 00:17:29.097 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev4", 00:17:29.097 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 } 00:17:29.097 ] 00:17:29.097 }' 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.097 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.357 [2024-11-05 11:33:28.571947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.357 "name": "Existed_Raid", 00:17:29.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.357 "strip_size_kb": 64, 00:17:29.357 "state": "configuring", 00:17:29.357 "raid_level": "raid5f", 00:17:29.357 "superblock": false, 00:17:29.357 "num_base_bdevs": 4, 
00:17:29.357 "num_base_bdevs_discovered": 2, 00:17:29.357 "num_base_bdevs_operational": 4, 00:17:29.357 "base_bdevs_list": [ 00:17:29.357 { 00:17:29.357 "name": "BaseBdev1", 00:17:29.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.357 "is_configured": false, 00:17:29.357 "data_offset": 0, 00:17:29.357 "data_size": 0 00:17:29.357 }, 00:17:29.357 { 00:17:29.357 "name": null, 00:17:29.357 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:29.357 "is_configured": false, 00:17:29.357 "data_offset": 0, 00:17:29.357 "data_size": 65536 00:17:29.357 }, 00:17:29.357 { 00:17:29.357 "name": "BaseBdev3", 00:17:29.357 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:29.357 "is_configured": true, 00:17:29.357 "data_offset": 0, 00:17:29.357 "data_size": 65536 00:17:29.357 }, 00:17:29.357 { 00:17:29.357 "name": "BaseBdev4", 00:17:29.357 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:29.357 "is_configured": true, 00:17:29.357 "data_offset": 0, 00:17:29.357 "data_size": 65536 00:17:29.357 } 00:17:29.357 ] 00:17:29.357 }' 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.357 11:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:29.926 11:33:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 [2024-11-05 11:33:29.114647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.926 BaseBdev1 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 11:33:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 [ 00:17:29.926 { 00:17:29.926 "name": "BaseBdev1", 00:17:29.926 "aliases": [ 00:17:29.926 "2bcaa17c-21d7-40d0-b7c9-993235ac186e" 00:17:29.926 ], 00:17:29.926 "product_name": "Malloc disk", 00:17:29.926 "block_size": 512, 00:17:29.926 "num_blocks": 65536, 00:17:29.926 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:29.926 "assigned_rate_limits": { 00:17:29.926 "rw_ios_per_sec": 0, 00:17:29.926 "rw_mbytes_per_sec": 0, 00:17:29.926 "r_mbytes_per_sec": 0, 00:17:29.926 "w_mbytes_per_sec": 0 00:17:29.926 }, 00:17:29.926 "claimed": true, 00:17:29.926 "claim_type": "exclusive_write", 00:17:29.926 "zoned": false, 00:17:29.926 "supported_io_types": { 00:17:29.926 "read": true, 00:17:29.926 "write": true, 00:17:29.926 "unmap": true, 00:17:29.926 "flush": true, 00:17:29.926 "reset": true, 00:17:29.926 "nvme_admin": false, 00:17:29.926 "nvme_io": false, 00:17:29.926 "nvme_io_md": false, 00:17:29.926 "write_zeroes": true, 00:17:29.926 "zcopy": true, 00:17:29.926 "get_zone_info": false, 00:17:29.926 "zone_management": false, 00:17:29.926 "zone_append": false, 00:17:29.926 "compare": false, 00:17:29.926 "compare_and_write": false, 00:17:29.926 "abort": true, 00:17:29.926 "seek_hole": false, 00:17:29.926 "seek_data": false, 00:17:29.926 "copy": true, 00:17:29.926 "nvme_iov_md": false 00:17:29.926 }, 00:17:29.926 "memory_domains": [ 00:17:29.926 { 00:17:29.926 "dma_device_id": "system", 00:17:29.926 "dma_device_type": 1 00:17:29.926 }, 00:17:29.926 { 00:17:29.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.926 "dma_device_type": 2 00:17:29.926 } 00:17:29.926 ], 00:17:29.926 "driver_specific": {} 00:17:29.926 } 00:17:29.926 ] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:29.926 11:33:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.186 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.186 "name": "Existed_Raid", 00:17:30.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.186 "strip_size_kb": 64, 00:17:30.186 "state": 
"configuring", 00:17:30.186 "raid_level": "raid5f", 00:17:30.186 "superblock": false, 00:17:30.186 "num_base_bdevs": 4, 00:17:30.186 "num_base_bdevs_discovered": 3, 00:17:30.186 "num_base_bdevs_operational": 4, 00:17:30.186 "base_bdevs_list": [ 00:17:30.186 { 00:17:30.186 "name": "BaseBdev1", 00:17:30.186 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:30.186 "is_configured": true, 00:17:30.186 "data_offset": 0, 00:17:30.186 "data_size": 65536 00:17:30.186 }, 00:17:30.186 { 00:17:30.186 "name": null, 00:17:30.186 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:30.186 "is_configured": false, 00:17:30.186 "data_offset": 0, 00:17:30.186 "data_size": 65536 00:17:30.186 }, 00:17:30.186 { 00:17:30.186 "name": "BaseBdev3", 00:17:30.186 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:30.186 "is_configured": true, 00:17:30.186 "data_offset": 0, 00:17:30.186 "data_size": 65536 00:17:30.186 }, 00:17:30.186 { 00:17:30.186 "name": "BaseBdev4", 00:17:30.186 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:30.186 "is_configured": true, 00:17:30.186 "data_offset": 0, 00:17:30.186 "data_size": 65536 00:17:30.186 } 00:17:30.186 ] 00:17:30.186 }' 00:17:30.186 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.186 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.445 11:33:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.445 [2024-11-05 11:33:29.669742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.445 11:33:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.445 11:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.705 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.705 "name": "Existed_Raid", 00:17:30.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.705 "strip_size_kb": 64, 00:17:30.705 "state": "configuring", 00:17:30.705 "raid_level": "raid5f", 00:17:30.705 "superblock": false, 00:17:30.705 "num_base_bdevs": 4, 00:17:30.705 "num_base_bdevs_discovered": 2, 00:17:30.705 "num_base_bdevs_operational": 4, 00:17:30.705 "base_bdevs_list": [ 00:17:30.705 { 00:17:30.705 "name": "BaseBdev1", 00:17:30.705 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:30.705 "is_configured": true, 00:17:30.705 "data_offset": 0, 00:17:30.705 "data_size": 65536 00:17:30.705 }, 00:17:30.705 { 00:17:30.705 "name": null, 00:17:30.705 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:30.705 "is_configured": false, 00:17:30.705 "data_offset": 0, 00:17:30.705 "data_size": 65536 00:17:30.705 }, 00:17:30.705 { 00:17:30.705 "name": null, 00:17:30.705 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:30.705 "is_configured": false, 00:17:30.705 "data_offset": 0, 00:17:30.705 "data_size": 65536 00:17:30.705 }, 00:17:30.705 { 00:17:30.705 "name": "BaseBdev4", 00:17:30.705 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:30.705 "is_configured": true, 00:17:30.705 "data_offset": 0, 00:17:30.705 "data_size": 65536 00:17:30.705 } 00:17:30.705 ] 00:17:30.705 }' 00:17:30.705 11:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.705 11:33:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.963 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.963 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:30.963 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 [2024-11-05 11:33:30.093030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.964 
11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.964 "name": "Existed_Raid", 00:17:30.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.964 "strip_size_kb": 64, 00:17:30.964 "state": "configuring", 00:17:30.964 "raid_level": "raid5f", 00:17:30.964 "superblock": false, 00:17:30.964 "num_base_bdevs": 4, 00:17:30.964 "num_base_bdevs_discovered": 3, 00:17:30.964 "num_base_bdevs_operational": 4, 00:17:30.964 "base_bdevs_list": [ 00:17:30.964 { 00:17:30.964 "name": "BaseBdev1", 00:17:30.964 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:30.964 "is_configured": true, 00:17:30.964 "data_offset": 0, 00:17:30.964 "data_size": 65536 00:17:30.964 }, 00:17:30.964 { 00:17:30.964 "name": null, 00:17:30.964 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:30.964 "is_configured": 
false, 00:17:30.964 "data_offset": 0, 00:17:30.964 "data_size": 65536 00:17:30.964 }, 00:17:30.964 { 00:17:30.964 "name": "BaseBdev3", 00:17:30.964 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:30.964 "is_configured": true, 00:17:30.964 "data_offset": 0, 00:17:30.964 "data_size": 65536 00:17:30.964 }, 00:17:30.964 { 00:17:30.964 "name": "BaseBdev4", 00:17:30.964 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:30.964 "is_configured": true, 00:17:30.964 "data_offset": 0, 00:17:30.964 "data_size": 65536 00:17:30.964 } 00:17:30.964 ] 00:17:30.964 }' 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.964 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.533 [2024-11-05 11:33:30.588245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.533 "name": "Existed_Raid", 00:17:31.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:31.533 "strip_size_kb": 64, 00:17:31.533 "state": "configuring", 00:17:31.533 "raid_level": "raid5f", 00:17:31.533 "superblock": false, 00:17:31.533 "num_base_bdevs": 4, 00:17:31.533 "num_base_bdevs_discovered": 2, 00:17:31.533 "num_base_bdevs_operational": 4, 00:17:31.533 "base_bdevs_list": [ 00:17:31.533 { 00:17:31.533 "name": null, 00:17:31.533 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:31.533 "is_configured": false, 00:17:31.533 "data_offset": 0, 00:17:31.533 "data_size": 65536 00:17:31.533 }, 00:17:31.533 { 00:17:31.533 "name": null, 00:17:31.533 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:31.533 "is_configured": false, 00:17:31.533 "data_offset": 0, 00:17:31.533 "data_size": 65536 00:17:31.533 }, 00:17:31.533 { 00:17:31.533 "name": "BaseBdev3", 00:17:31.533 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:31.533 "is_configured": true, 00:17:31.533 "data_offset": 0, 00:17:31.533 "data_size": 65536 00:17:31.533 }, 00:17:31.533 { 00:17:31.533 "name": "BaseBdev4", 00:17:31.533 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:31.533 "is_configured": true, 00:17:31.533 "data_offset": 0, 00:17:31.533 "data_size": 65536 00:17:31.533 } 00:17:31.533 ] 00:17:31.533 }' 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.533 11:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.102 [2024-11-05 11:33:31.133406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.102 "name": "Existed_Raid", 00:17:32.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.102 "strip_size_kb": 64, 00:17:32.102 "state": "configuring", 00:17:32.102 "raid_level": "raid5f", 00:17:32.102 "superblock": false, 00:17:32.102 "num_base_bdevs": 4, 00:17:32.102 "num_base_bdevs_discovered": 3, 00:17:32.102 "num_base_bdevs_operational": 4, 00:17:32.102 "base_bdevs_list": [ 00:17:32.102 { 00:17:32.102 "name": null, 00:17:32.102 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:32.102 "is_configured": false, 00:17:32.102 "data_offset": 0, 00:17:32.102 "data_size": 65536 00:17:32.102 }, 00:17:32.102 { 00:17:32.102 "name": "BaseBdev2", 00:17:32.102 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:32.102 "is_configured": true, 00:17:32.102 "data_offset": 0, 00:17:32.102 "data_size": 65536 00:17:32.102 }, 00:17:32.102 { 00:17:32.102 "name": "BaseBdev3", 00:17:32.102 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:32.102 "is_configured": true, 00:17:32.102 "data_offset": 0, 00:17:32.102 "data_size": 65536 00:17:32.102 }, 00:17:32.102 { 00:17:32.102 "name": "BaseBdev4", 00:17:32.102 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:32.102 "is_configured": true, 00:17:32.102 "data_offset": 0, 00:17:32.102 "data_size": 65536 00:17:32.102 } 00:17:32.102 ] 00:17:32.102 }' 00:17:32.102 11:33:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.102 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2bcaa17c-21d7-40d0-b7c9-993235ac186e 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 [2024-11-05 11:33:31.783349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:32.671 [2024-11-05 
11:33:31.783399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:32.671 [2024-11-05 11:33:31.783407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:32.671 [2024-11-05 11:33:31.783647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:32.671 [2024-11-05 11:33:31.790316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:32.671 [2024-11-05 11:33:31.790400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:32.671 [2024-11-05 11:33:31.790679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.671 NewBaseBdev 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 [ 00:17:32.671 { 00:17:32.671 "name": "NewBaseBdev", 00:17:32.671 "aliases": [ 00:17:32.671 "2bcaa17c-21d7-40d0-b7c9-993235ac186e" 00:17:32.671 ], 00:17:32.671 "product_name": "Malloc disk", 00:17:32.671 "block_size": 512, 00:17:32.671 "num_blocks": 65536, 00:17:32.671 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:32.671 "assigned_rate_limits": { 00:17:32.671 "rw_ios_per_sec": 0, 00:17:32.671 "rw_mbytes_per_sec": 0, 00:17:32.671 "r_mbytes_per_sec": 0, 00:17:32.671 "w_mbytes_per_sec": 0 00:17:32.671 }, 00:17:32.671 "claimed": true, 00:17:32.671 "claim_type": "exclusive_write", 00:17:32.671 "zoned": false, 00:17:32.671 "supported_io_types": { 00:17:32.671 "read": true, 00:17:32.671 "write": true, 00:17:32.671 "unmap": true, 00:17:32.671 "flush": true, 00:17:32.671 "reset": true, 00:17:32.671 "nvme_admin": false, 00:17:32.671 "nvme_io": false, 00:17:32.671 "nvme_io_md": false, 00:17:32.671 "write_zeroes": true, 00:17:32.671 "zcopy": true, 00:17:32.671 "get_zone_info": false, 00:17:32.671 "zone_management": false, 00:17:32.671 "zone_append": false, 00:17:32.671 "compare": false, 00:17:32.671 "compare_and_write": false, 00:17:32.671 "abort": true, 00:17:32.671 "seek_hole": false, 00:17:32.671 "seek_data": false, 00:17:32.671 "copy": true, 00:17:32.671 "nvme_iov_md": false 00:17:32.671 }, 00:17:32.671 "memory_domains": [ 00:17:32.671 { 00:17:32.671 "dma_device_id": "system", 00:17:32.671 "dma_device_type": 1 00:17:32.671 }, 00:17:32.671 { 00:17:32.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.671 "dma_device_type": 2 00:17:32.671 } 
00:17:32.671 ], 00:17:32.671 "driver_specific": {} 00:17:32.671 } 00:17:32.671 ] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.671 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.672 "name": "Existed_Raid", 00:17:32.672 "uuid": "77d3c65e-dc85-4c1f-9e26-70d9ca52c47f", 00:17:32.672 "strip_size_kb": 64, 00:17:32.672 "state": "online", 00:17:32.672 "raid_level": "raid5f", 00:17:32.672 "superblock": false, 00:17:32.672 "num_base_bdevs": 4, 00:17:32.672 "num_base_bdevs_discovered": 4, 00:17:32.672 "num_base_bdevs_operational": 4, 00:17:32.672 "base_bdevs_list": [ 00:17:32.672 { 00:17:32.672 "name": "NewBaseBdev", 00:17:32.672 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:32.672 "is_configured": true, 00:17:32.672 "data_offset": 0, 00:17:32.672 "data_size": 65536 00:17:32.672 }, 00:17:32.672 { 00:17:32.672 "name": "BaseBdev2", 00:17:32.672 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:32.672 "is_configured": true, 00:17:32.672 "data_offset": 0, 00:17:32.672 "data_size": 65536 00:17:32.672 }, 00:17:32.672 { 00:17:32.672 "name": "BaseBdev3", 00:17:32.672 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:32.672 "is_configured": true, 00:17:32.672 "data_offset": 0, 00:17:32.672 "data_size": 65536 00:17:32.672 }, 00:17:32.672 { 00:17:32.672 "name": "BaseBdev4", 00:17:32.672 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:32.672 "is_configured": true, 00:17:32.672 "data_offset": 0, 00:17:32.672 "data_size": 65536 00:17:32.672 } 00:17:32.672 ] 00:17:32.672 }' 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.672 11:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.240 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.241 [2024-11-05 11:33:32.258248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.241 "name": "Existed_Raid", 00:17:33.241 "aliases": [ 00:17:33.241 "77d3c65e-dc85-4c1f-9e26-70d9ca52c47f" 00:17:33.241 ], 00:17:33.241 "product_name": "Raid Volume", 00:17:33.241 "block_size": 512, 00:17:33.241 "num_blocks": 196608, 00:17:33.241 "uuid": "77d3c65e-dc85-4c1f-9e26-70d9ca52c47f", 00:17:33.241 "assigned_rate_limits": { 00:17:33.241 "rw_ios_per_sec": 0, 00:17:33.241 "rw_mbytes_per_sec": 0, 00:17:33.241 "r_mbytes_per_sec": 0, 00:17:33.241 "w_mbytes_per_sec": 0 00:17:33.241 }, 00:17:33.241 "claimed": false, 00:17:33.241 "zoned": false, 00:17:33.241 "supported_io_types": { 00:17:33.241 "read": true, 00:17:33.241 "write": true, 00:17:33.241 "unmap": false, 00:17:33.241 "flush": false, 00:17:33.241 "reset": true, 00:17:33.241 "nvme_admin": false, 00:17:33.241 "nvme_io": false, 00:17:33.241 "nvme_io_md": 
false, 00:17:33.241 "write_zeroes": true, 00:17:33.241 "zcopy": false, 00:17:33.241 "get_zone_info": false, 00:17:33.241 "zone_management": false, 00:17:33.241 "zone_append": false, 00:17:33.241 "compare": false, 00:17:33.241 "compare_and_write": false, 00:17:33.241 "abort": false, 00:17:33.241 "seek_hole": false, 00:17:33.241 "seek_data": false, 00:17:33.241 "copy": false, 00:17:33.241 "nvme_iov_md": false 00:17:33.241 }, 00:17:33.241 "driver_specific": { 00:17:33.241 "raid": { 00:17:33.241 "uuid": "77d3c65e-dc85-4c1f-9e26-70d9ca52c47f", 00:17:33.241 "strip_size_kb": 64, 00:17:33.241 "state": "online", 00:17:33.241 "raid_level": "raid5f", 00:17:33.241 "superblock": false, 00:17:33.241 "num_base_bdevs": 4, 00:17:33.241 "num_base_bdevs_discovered": 4, 00:17:33.241 "num_base_bdevs_operational": 4, 00:17:33.241 "base_bdevs_list": [ 00:17:33.241 { 00:17:33.241 "name": "NewBaseBdev", 00:17:33.241 "uuid": "2bcaa17c-21d7-40d0-b7c9-993235ac186e", 00:17:33.241 "is_configured": true, 00:17:33.241 "data_offset": 0, 00:17:33.241 "data_size": 65536 00:17:33.241 }, 00:17:33.241 { 00:17:33.241 "name": "BaseBdev2", 00:17:33.241 "uuid": "3cc2dfd4-4a28-45ac-8086-42b28a20a441", 00:17:33.241 "is_configured": true, 00:17:33.241 "data_offset": 0, 00:17:33.241 "data_size": 65536 00:17:33.241 }, 00:17:33.241 { 00:17:33.241 "name": "BaseBdev3", 00:17:33.241 "uuid": "3b797dd8-ff63-4a48-bd8b-18e193a03447", 00:17:33.241 "is_configured": true, 00:17:33.241 "data_offset": 0, 00:17:33.241 "data_size": 65536 00:17:33.241 }, 00:17:33.241 { 00:17:33.241 "name": "BaseBdev4", 00:17:33.241 "uuid": "dde3359b-4933-4284-8552-3119ae36a7c7", 00:17:33.241 "is_configured": true, 00:17:33.241 "data_offset": 0, 00:17:33.241 "data_size": 65536 00:17:33.241 } 00:17:33.241 ] 00:17:33.241 } 00:17:33.241 } 00:17:33.241 }' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.241 11:33:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:33.241 BaseBdev2 00:17:33.241 BaseBdev3 00:17:33.241 BaseBdev4' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.241 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.501 11:33:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.501 [2024-11-05 11:33:32.601441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.501 [2024-11-05 11:33:32.601467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.501 [2024-11-05 11:33:32.601533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.501 [2024-11-05 11:33:32.601818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.501 [2024-11-05 11:33:32.601828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82796 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 82796 ']' 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 82796 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82796 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82796' 00:17:33.501 killing process with pid 82796 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 82796 00:17:33.501 [2024-11-05 11:33:32.651533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.501 11:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 82796 00:17:33.761 [2024-11-05 11:33:33.021947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:35.141 00:17:35.141 real 0m11.583s 00:17:35.141 user 0m18.413s 00:17:35.141 sys 0m2.255s 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:35.141 ************************************ 00:17:35.141 END TEST raid5f_state_function_test 00:17:35.141 ************************************ 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.141 11:33:34 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:35.141 11:33:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:35.141 11:33:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:35.141 11:33:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.141 ************************************ 00:17:35.141 START TEST 
raid5f_state_function_test_sb 00:17:35.141 ************************************ 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:35.141 
11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:35.141 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83471 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83471' 00:17:35.142 Process raid pid: 83471 00:17:35.142 11:33:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83471 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83471 ']' 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:35.142 11:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 [2024-11-05 11:33:34.256985] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:17:35.142 [2024-11-05 11:33:34.257181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.401 [2024-11-05 11:33:34.443754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.401 [2024-11-05 11:33:34.552401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.661 [2024-11-05 11:33:34.748993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.661 [2024-11-05 11:33:34.749080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.920 [2024-11-05 11:33:35.061101] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.920 [2024-11-05 11:33:35.061228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.920 [2024-11-05 11:33:35.061250] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.920 [2024-11-05 11:33:35.061260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.920 [2024-11-05 11:33:35.061266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:35.920 [2024-11-05 11:33:35.061275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.920 [2024-11-05 11:33:35.061281] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:35.920 [2024-11-05 11:33:35.061290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.920 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.921 "name": "Existed_Raid", 00:17:35.921 "uuid": "4d7aff8b-c52e-45d3-a2b0-eff5d8688433", 00:17:35.921 "strip_size_kb": 64, 00:17:35.921 "state": "configuring", 00:17:35.921 "raid_level": "raid5f", 00:17:35.921 "superblock": true, 00:17:35.921 "num_base_bdevs": 4, 00:17:35.921 "num_base_bdevs_discovered": 0, 00:17:35.921 "num_base_bdevs_operational": 4, 00:17:35.921 "base_bdevs_list": [ 00:17:35.921 { 00:17:35.921 "name": "BaseBdev1", 00:17:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.921 "is_configured": false, 00:17:35.921 "data_offset": 0, 00:17:35.921 "data_size": 0 00:17:35.921 }, 00:17:35.921 { 00:17:35.921 "name": "BaseBdev2", 00:17:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.921 "is_configured": false, 00:17:35.921 "data_offset": 0, 00:17:35.921 "data_size": 0 00:17:35.921 }, 00:17:35.921 { 00:17:35.921 "name": "BaseBdev3", 00:17:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.921 "is_configured": false, 00:17:35.921 "data_offset": 0, 00:17:35.921 "data_size": 0 00:17:35.921 }, 00:17:35.921 { 00:17:35.921 "name": "BaseBdev4", 00:17:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.921 "is_configured": false, 00:17:35.921 "data_offset": 0, 00:17:35.921 "data_size": 0 00:17:35.921 } 00:17:35.921 ] 00:17:35.921 }' 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.921 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 [2024-11-05 11:33:35.508261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.490 [2024-11-05 11:33:35.508344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 [2024-11-05 11:33:35.520256] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.490 [2024-11-05 11:33:35.520328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.490 [2024-11-05 11:33:35.520355] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.490 [2024-11-05 11:33:35.520377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.490 [2024-11-05 11:33:35.520394] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.490 [2024-11-05 11:33:35.520430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.490 [2024-11-05 11:33:35.520447] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:36.490 [2024-11-05 11:33:35.520475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 [2024-11-05 11:33:35.565812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.490 BaseBdev1 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.490 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.490 [ 00:17:36.490 { 00:17:36.490 "name": "BaseBdev1", 00:17:36.490 "aliases": [ 00:17:36.490 "3d742e42-5f55-42b7-8338-7120fbb9364a" 00:17:36.490 ], 00:17:36.490 "product_name": "Malloc disk", 00:17:36.490 "block_size": 512, 00:17:36.490 "num_blocks": 65536, 00:17:36.490 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:36.490 "assigned_rate_limits": { 00:17:36.490 "rw_ios_per_sec": 0, 00:17:36.490 "rw_mbytes_per_sec": 0, 00:17:36.490 "r_mbytes_per_sec": 0, 00:17:36.490 "w_mbytes_per_sec": 0 00:17:36.490 }, 00:17:36.490 "claimed": true, 00:17:36.490 "claim_type": "exclusive_write", 00:17:36.490 "zoned": false, 00:17:36.490 "supported_io_types": { 00:17:36.490 "read": true, 00:17:36.490 "write": true, 00:17:36.490 "unmap": true, 00:17:36.490 "flush": true, 00:17:36.490 "reset": true, 00:17:36.490 "nvme_admin": false, 00:17:36.490 "nvme_io": false, 00:17:36.490 "nvme_io_md": false, 00:17:36.490 "write_zeroes": true, 00:17:36.490 "zcopy": true, 00:17:36.490 "get_zone_info": false, 00:17:36.490 "zone_management": false, 00:17:36.490 "zone_append": false, 00:17:36.490 "compare": false, 00:17:36.491 "compare_and_write": false, 00:17:36.491 "abort": true, 00:17:36.491 "seek_hole": false, 00:17:36.491 "seek_data": false, 00:17:36.491 "copy": true, 00:17:36.491 "nvme_iov_md": false 00:17:36.491 }, 00:17:36.491 "memory_domains": [ 00:17:36.491 { 00:17:36.491 "dma_device_id": "system", 00:17:36.491 "dma_device_type": 1 00:17:36.491 }, 00:17:36.491 { 00:17:36.491 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:36.491 "dma_device_type": 2 00:17:36.491 } 00:17:36.491 ], 00:17:36.491 "driver_specific": {} 00:17:36.491 } 00:17:36.491 ] 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.491 11:33:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.491 "name": "Existed_Raid", 00:17:36.491 "uuid": "44cfc231-62b5-4fab-8a70-95ea60fb3742", 00:17:36.491 "strip_size_kb": 64, 00:17:36.491 "state": "configuring", 00:17:36.491 "raid_level": "raid5f", 00:17:36.491 "superblock": true, 00:17:36.491 "num_base_bdevs": 4, 00:17:36.491 "num_base_bdevs_discovered": 1, 00:17:36.491 "num_base_bdevs_operational": 4, 00:17:36.491 "base_bdevs_list": [ 00:17:36.491 { 00:17:36.491 "name": "BaseBdev1", 00:17:36.491 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:36.491 "is_configured": true, 00:17:36.491 "data_offset": 2048, 00:17:36.491 "data_size": 63488 00:17:36.491 }, 00:17:36.491 { 00:17:36.491 "name": "BaseBdev2", 00:17:36.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.491 "is_configured": false, 00:17:36.491 "data_offset": 0, 00:17:36.491 "data_size": 0 00:17:36.491 }, 00:17:36.491 { 00:17:36.491 "name": "BaseBdev3", 00:17:36.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.491 "is_configured": false, 00:17:36.491 "data_offset": 0, 00:17:36.491 "data_size": 0 00:17:36.491 }, 00:17:36.491 { 00:17:36.491 "name": "BaseBdev4", 00:17:36.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.491 "is_configured": false, 00:17:36.491 "data_offset": 0, 00:17:36.491 "data_size": 0 00:17:36.491 } 00:17:36.491 ] 00:17:36.491 }' 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.491 11:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:37.060 11:33:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.060 [2024-11-05 11:33:36.076944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.060 [2024-11-05 11:33:36.076987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.060 [2024-11-05 11:33:36.088984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.060 [2024-11-05 11:33:36.090740] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.060 [2024-11-05 11:33:36.090851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.060 [2024-11-05 11:33:36.090865] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.060 [2024-11-05 11:33:36.090886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.060 [2024-11-05 11:33:36.090892] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:37.060 [2024-11-05 11:33:36.090900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.060 11:33:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.060 "name": "Existed_Raid", 00:17:37.060 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:37.060 "strip_size_kb": 64, 00:17:37.060 "state": "configuring", 00:17:37.060 "raid_level": "raid5f", 00:17:37.060 "superblock": true, 00:17:37.060 "num_base_bdevs": 4, 00:17:37.060 "num_base_bdevs_discovered": 1, 00:17:37.060 "num_base_bdevs_operational": 4, 00:17:37.060 "base_bdevs_list": [ 00:17:37.060 { 00:17:37.060 "name": "BaseBdev1", 00:17:37.060 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:37.060 "is_configured": true, 00:17:37.060 "data_offset": 2048, 00:17:37.060 "data_size": 63488 00:17:37.060 }, 00:17:37.060 { 00:17:37.060 "name": "BaseBdev2", 00:17:37.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.060 "is_configured": false, 00:17:37.060 "data_offset": 0, 00:17:37.060 "data_size": 0 00:17:37.060 }, 00:17:37.060 { 00:17:37.060 "name": "BaseBdev3", 00:17:37.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.060 "is_configured": false, 00:17:37.060 "data_offset": 0, 00:17:37.060 "data_size": 0 00:17:37.060 }, 00:17:37.060 { 00:17:37.060 "name": "BaseBdev4", 00:17:37.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.060 "is_configured": false, 00:17:37.060 "data_offset": 0, 00:17:37.060 "data_size": 0 00:17:37.060 } 00:17:37.060 ] 00:17:37.060 }' 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.060 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.320 [2024-11-05 11:33:36.588753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.320 BaseBdev2 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.320 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.579 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.579 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:37.579 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.579 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.579 [ 00:17:37.579 { 00:17:37.579 "name": "BaseBdev2", 00:17:37.579 "aliases": [ 00:17:37.579 
"6069f2b1-0f74-40f4-800f-5749258006de" 00:17:37.579 ], 00:17:37.579 "product_name": "Malloc disk", 00:17:37.579 "block_size": 512, 00:17:37.579 "num_blocks": 65536, 00:17:37.579 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:37.579 "assigned_rate_limits": { 00:17:37.579 "rw_ios_per_sec": 0, 00:17:37.579 "rw_mbytes_per_sec": 0, 00:17:37.579 "r_mbytes_per_sec": 0, 00:17:37.580 "w_mbytes_per_sec": 0 00:17:37.580 }, 00:17:37.580 "claimed": true, 00:17:37.580 "claim_type": "exclusive_write", 00:17:37.580 "zoned": false, 00:17:37.580 "supported_io_types": { 00:17:37.580 "read": true, 00:17:37.580 "write": true, 00:17:37.580 "unmap": true, 00:17:37.580 "flush": true, 00:17:37.580 "reset": true, 00:17:37.580 "nvme_admin": false, 00:17:37.580 "nvme_io": false, 00:17:37.580 "nvme_io_md": false, 00:17:37.580 "write_zeroes": true, 00:17:37.580 "zcopy": true, 00:17:37.580 "get_zone_info": false, 00:17:37.580 "zone_management": false, 00:17:37.580 "zone_append": false, 00:17:37.580 "compare": false, 00:17:37.580 "compare_and_write": false, 00:17:37.580 "abort": true, 00:17:37.580 "seek_hole": false, 00:17:37.580 "seek_data": false, 00:17:37.580 "copy": true, 00:17:37.580 "nvme_iov_md": false 00:17:37.580 }, 00:17:37.580 "memory_domains": [ 00:17:37.580 { 00:17:37.580 "dma_device_id": "system", 00:17:37.580 "dma_device_type": 1 00:17:37.580 }, 00:17:37.580 { 00:17:37.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.580 "dma_device_type": 2 00:17:37.580 } 00:17:37.580 ], 00:17:37.580 "driver_specific": {} 00:17:37.580 } 00:17:37.580 ] 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
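The `waitforbdev BaseBdev2` step above polls `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000` until the bdev appears; the descriptor it prints shows the malloc bdev already claimed by the raid. A small sketch of the two properties worth noticing in that descriptor (JSON abridged from the trace; the size arithmetic is our inference from `bdev_malloc_create 32 512`):

```python
import json

# Descriptor as printed by `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000`,
# abridged from the trace above.
desc = json.loads("""
{"name": "BaseBdev2",
 "product_name": "Malloc disk",
 "block_size": 512,
 "num_blocks": 65536,
 "claimed": true,
 "claim_type": "exclusive_write"}
""")

# `bdev_malloc_create 32 512` asks for 32 MiB of 512-byte blocks:
# 32 * 1024 * 1024 / 512 == 65536, matching num_blocks in the trace.
assert desc["num_blocks"] * desc["block_size"] == 32 * 1024 * 1024

# Once raid_bdev_configure_base_bdev claims it, no other module can
# open the base bdev for write.
assert desc["claimed"] and desc["claim_type"] == "exclusive_write"
```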
00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.580 "name": "Existed_Raid", 00:17:37.580 "uuid": 
"3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:37.580 "strip_size_kb": 64, 00:17:37.580 "state": "configuring", 00:17:37.580 "raid_level": "raid5f", 00:17:37.580 "superblock": true, 00:17:37.580 "num_base_bdevs": 4, 00:17:37.580 "num_base_bdevs_discovered": 2, 00:17:37.580 "num_base_bdevs_operational": 4, 00:17:37.580 "base_bdevs_list": [ 00:17:37.580 { 00:17:37.580 "name": "BaseBdev1", 00:17:37.580 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:37.580 "is_configured": true, 00:17:37.580 "data_offset": 2048, 00:17:37.580 "data_size": 63488 00:17:37.580 }, 00:17:37.580 { 00:17:37.580 "name": "BaseBdev2", 00:17:37.580 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:37.580 "is_configured": true, 00:17:37.580 "data_offset": 2048, 00:17:37.580 "data_size": 63488 00:17:37.580 }, 00:17:37.580 { 00:17:37.580 "name": "BaseBdev3", 00:17:37.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.580 "is_configured": false, 00:17:37.580 "data_offset": 0, 00:17:37.580 "data_size": 0 00:17:37.580 }, 00:17:37.580 { 00:17:37.580 "name": "BaseBdev4", 00:17:37.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.580 "is_configured": false, 00:17:37.580 "data_offset": 0, 00:17:37.580 "data_size": 0 00:17:37.580 } 00:17:37.580 ] 00:17:37.580 }' 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.580 11:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.840 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.840 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.840 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.101 [2024-11-05 11:33:37.158937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.101 BaseBdev3 
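The `(( i++ )) / (( i < num_base_bdevs ))` loop visible in the trace (bdev_raid.sh@250) creates one malloc base bdev per iteration and re-runs `verify_raid_bdev_state` each time. A toy Python model of the expected state progression — the RPC call in the comment is a stand-in, not executed:

```python
# Model of the test loop: BaseBdev1 was created before the loop, then
# BaseBdev2..4 are added one per iteration.
num_base_bdevs = 4
discovered = 1  # BaseBdev1

for i in range(1, num_base_bdevs):
    # (stand-in for: rpc_cmd bdev_malloc_create 32 512 -b BaseBdev<i+1>)
    discovered += 1
    # The raid stays "configuring" until all operational bdevs are found.
    state = "configuring" if discovered < num_base_bdevs else "online"

assert discovered == 4 and state == "online"
```

This matches the trace: `num_base_bdevs_discovered` climbs 1, 2, 3 while `state` remains `configuring`, and the final verification after BaseBdev4 expects `online`.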
00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.101 [ 00:17:38.101 { 00:17:38.101 "name": "BaseBdev3", 00:17:38.101 "aliases": [ 00:17:38.101 "ad905dee-a6de-416b-91da-d400b50feabf" 00:17:38.101 ], 00:17:38.101 "product_name": "Malloc disk", 00:17:38.101 "block_size": 512, 00:17:38.101 "num_blocks": 65536, 00:17:38.101 "uuid": "ad905dee-a6de-416b-91da-d400b50feabf", 00:17:38.101 
"assigned_rate_limits": { 00:17:38.101 "rw_ios_per_sec": 0, 00:17:38.101 "rw_mbytes_per_sec": 0, 00:17:38.101 "r_mbytes_per_sec": 0, 00:17:38.101 "w_mbytes_per_sec": 0 00:17:38.101 }, 00:17:38.101 "claimed": true, 00:17:38.101 "claim_type": "exclusive_write", 00:17:38.101 "zoned": false, 00:17:38.101 "supported_io_types": { 00:17:38.101 "read": true, 00:17:38.101 "write": true, 00:17:38.101 "unmap": true, 00:17:38.101 "flush": true, 00:17:38.101 "reset": true, 00:17:38.101 "nvme_admin": false, 00:17:38.101 "nvme_io": false, 00:17:38.101 "nvme_io_md": false, 00:17:38.101 "write_zeroes": true, 00:17:38.101 "zcopy": true, 00:17:38.101 "get_zone_info": false, 00:17:38.101 "zone_management": false, 00:17:38.101 "zone_append": false, 00:17:38.101 "compare": false, 00:17:38.101 "compare_and_write": false, 00:17:38.101 "abort": true, 00:17:38.101 "seek_hole": false, 00:17:38.101 "seek_data": false, 00:17:38.101 "copy": true, 00:17:38.101 "nvme_iov_md": false 00:17:38.101 }, 00:17:38.101 "memory_domains": [ 00:17:38.101 { 00:17:38.101 "dma_device_id": "system", 00:17:38.101 "dma_device_type": 1 00:17:38.101 }, 00:17:38.101 { 00:17:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.101 "dma_device_type": 2 00:17:38.101 } 00:17:38.101 ], 00:17:38.101 "driver_specific": {} 00:17:38.101 } 00:17:38.101 ] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.101 "name": "Existed_Raid", 00:17:38.101 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:38.101 "strip_size_kb": 64, 00:17:38.101 "state": "configuring", 00:17:38.101 "raid_level": "raid5f", 00:17:38.101 "superblock": true, 00:17:38.101 "num_base_bdevs": 4, 00:17:38.101 "num_base_bdevs_discovered": 3, 
00:17:38.101 "num_base_bdevs_operational": 4, 00:17:38.101 "base_bdevs_list": [ 00:17:38.101 { 00:17:38.101 "name": "BaseBdev1", 00:17:38.101 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:38.101 "is_configured": true, 00:17:38.101 "data_offset": 2048, 00:17:38.101 "data_size": 63488 00:17:38.101 }, 00:17:38.101 { 00:17:38.101 "name": "BaseBdev2", 00:17:38.101 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:38.101 "is_configured": true, 00:17:38.101 "data_offset": 2048, 00:17:38.101 "data_size": 63488 00:17:38.101 }, 00:17:38.101 { 00:17:38.101 "name": "BaseBdev3", 00:17:38.101 "uuid": "ad905dee-a6de-416b-91da-d400b50feabf", 00:17:38.101 "is_configured": true, 00:17:38.101 "data_offset": 2048, 00:17:38.101 "data_size": 63488 00:17:38.101 }, 00:17:38.101 { 00:17:38.101 "name": "BaseBdev4", 00:17:38.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.101 "is_configured": false, 00:17:38.101 "data_offset": 0, 00:17:38.101 "data_size": 0 00:17:38.101 } 00:17:38.101 ] 00:17:38.101 }' 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.101 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.360 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:38.360 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.360 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.620 [2024-11-05 11:33:37.671157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:38.620 [2024-11-05 11:33:37.671552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:38.620 [2024-11-05 11:33:37.671605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:38.620 [2024-11-05 
11:33:37.671887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:38.620 BaseBdev4 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.620 [2024-11-05 11:33:37.678985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:38.620 [2024-11-05 11:33:37.679040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:38.620 [2024-11-05 11:33:37.679359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:38.620 11:33:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.620 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.620 [ 00:17:38.620 { 00:17:38.620 "name": "BaseBdev4", 00:17:38.620 "aliases": [ 00:17:38.620 "01e44e4b-731b-4f77-87bd-3f6cab51089b" 00:17:38.620 ], 00:17:38.620 "product_name": "Malloc disk", 00:17:38.620 "block_size": 512, 00:17:38.620 "num_blocks": 65536, 00:17:38.620 "uuid": "01e44e4b-731b-4f77-87bd-3f6cab51089b", 00:17:38.620 "assigned_rate_limits": { 00:17:38.620 "rw_ios_per_sec": 0, 00:17:38.620 "rw_mbytes_per_sec": 0, 00:17:38.620 "r_mbytes_per_sec": 0, 00:17:38.620 "w_mbytes_per_sec": 0 00:17:38.620 }, 00:17:38.620 "claimed": true, 00:17:38.620 "claim_type": "exclusive_write", 00:17:38.620 "zoned": false, 00:17:38.620 "supported_io_types": { 00:17:38.620 "read": true, 00:17:38.620 "write": true, 00:17:38.620 "unmap": true, 00:17:38.620 "flush": true, 00:17:38.620 "reset": true, 00:17:38.621 "nvme_admin": false, 00:17:38.621 "nvme_io": false, 00:17:38.621 "nvme_io_md": false, 00:17:38.621 "write_zeroes": true, 00:17:38.621 "zcopy": true, 00:17:38.621 "get_zone_info": false, 00:17:38.621 "zone_management": false, 00:17:38.621 "zone_append": false, 00:17:38.621 "compare": false, 00:17:38.621 "compare_and_write": false, 00:17:38.621 "abort": true, 00:17:38.621 "seek_hole": false, 00:17:38.621 "seek_data": false, 00:17:38.621 "copy": true, 00:17:38.621 "nvme_iov_md": false 00:17:38.621 }, 00:17:38.621 "memory_domains": [ 00:17:38.621 { 00:17:38.621 "dma_device_id": "system", 00:17:38.621 "dma_device_type": 1 00:17:38.621 }, 00:17:38.621 { 00:17:38.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.621 "dma_device_type": 2 00:17:38.621 } 00:17:38.621 ], 00:17:38.621 "driver_specific": {} 00:17:38.621 } 00:17:38.621 ] 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.621 11:33:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.621 "name": "Existed_Raid", 00:17:38.621 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:38.621 "strip_size_kb": 64, 00:17:38.621 "state": "online", 00:17:38.621 "raid_level": "raid5f", 00:17:38.621 "superblock": true, 00:17:38.621 "num_base_bdevs": 4, 00:17:38.621 "num_base_bdevs_discovered": 4, 00:17:38.621 "num_base_bdevs_operational": 4, 00:17:38.621 "base_bdevs_list": [ 00:17:38.621 { 00:17:38.621 "name": "BaseBdev1", 00:17:38.621 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:38.621 "is_configured": true, 00:17:38.621 "data_offset": 2048, 00:17:38.621 "data_size": 63488 00:17:38.621 }, 00:17:38.621 { 00:17:38.621 "name": "BaseBdev2", 00:17:38.621 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:38.621 "is_configured": true, 00:17:38.621 "data_offset": 2048, 00:17:38.621 "data_size": 63488 00:17:38.621 }, 00:17:38.621 { 00:17:38.621 "name": "BaseBdev3", 00:17:38.621 "uuid": "ad905dee-a6de-416b-91da-d400b50feabf", 00:17:38.621 "is_configured": true, 00:17:38.621 "data_offset": 2048, 00:17:38.621 "data_size": 63488 00:17:38.621 }, 00:17:38.621 { 00:17:38.621 "name": "BaseBdev4", 00:17:38.621 "uuid": "01e44e4b-731b-4f77-87bd-3f6cab51089b", 00:17:38.621 "is_configured": true, 00:17:38.621 "data_offset": 2048, 00:17:38.621 "data_size": 63488 00:17:38.621 } 00:17:38.621 ] 00:17:38.621 }' 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.621 11:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.880 [2024-11-05 11:33:38.134635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.880 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.140 "name": "Existed_Raid", 00:17:39.140 "aliases": [ 00:17:39.140 "3683f1c3-0198-456a-a8ff-0285163a87f7" 00:17:39.140 ], 00:17:39.140 "product_name": "Raid Volume", 00:17:39.140 "block_size": 512, 00:17:39.140 "num_blocks": 190464, 00:17:39.140 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:39.140 "assigned_rate_limits": { 00:17:39.140 "rw_ios_per_sec": 0, 00:17:39.140 "rw_mbytes_per_sec": 0, 00:17:39.140 "r_mbytes_per_sec": 0, 00:17:39.140 "w_mbytes_per_sec": 0 00:17:39.140 }, 00:17:39.140 "claimed": false, 00:17:39.140 "zoned": false, 00:17:39.140 "supported_io_types": { 00:17:39.140 "read": true, 00:17:39.140 "write": true, 00:17:39.140 "unmap": false, 00:17:39.140 "flush": false, 
00:17:39.140 "reset": true, 00:17:39.140 "nvme_admin": false, 00:17:39.140 "nvme_io": false, 00:17:39.140 "nvme_io_md": false, 00:17:39.140 "write_zeroes": true, 00:17:39.140 "zcopy": false, 00:17:39.140 "get_zone_info": false, 00:17:39.140 "zone_management": false, 00:17:39.140 "zone_append": false, 00:17:39.140 "compare": false, 00:17:39.140 "compare_and_write": false, 00:17:39.140 "abort": false, 00:17:39.140 "seek_hole": false, 00:17:39.140 "seek_data": false, 00:17:39.140 "copy": false, 00:17:39.140 "nvme_iov_md": false 00:17:39.140 }, 00:17:39.140 "driver_specific": { 00:17:39.140 "raid": { 00:17:39.140 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:39.140 "strip_size_kb": 64, 00:17:39.140 "state": "online", 00:17:39.140 "raid_level": "raid5f", 00:17:39.140 "superblock": true, 00:17:39.140 "num_base_bdevs": 4, 00:17:39.140 "num_base_bdevs_discovered": 4, 00:17:39.140 "num_base_bdevs_operational": 4, 00:17:39.140 "base_bdevs_list": [ 00:17:39.140 { 00:17:39.140 "name": "BaseBdev1", 00:17:39.140 "uuid": "3d742e42-5f55-42b7-8338-7120fbb9364a", 00:17:39.140 "is_configured": true, 00:17:39.140 "data_offset": 2048, 00:17:39.140 "data_size": 63488 00:17:39.140 }, 00:17:39.140 { 00:17:39.140 "name": "BaseBdev2", 00:17:39.140 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:39.140 "is_configured": true, 00:17:39.140 "data_offset": 2048, 00:17:39.140 "data_size": 63488 00:17:39.140 }, 00:17:39.140 { 00:17:39.140 "name": "BaseBdev3", 00:17:39.140 "uuid": "ad905dee-a6de-416b-91da-d400b50feabf", 00:17:39.140 "is_configured": true, 00:17:39.140 "data_offset": 2048, 00:17:39.140 "data_size": 63488 00:17:39.140 }, 00:17:39.140 { 00:17:39.140 "name": "BaseBdev4", 00:17:39.140 "uuid": "01e44e4b-731b-4f77-87bd-3f6cab51089b", 00:17:39.140 "is_configured": true, 00:17:39.140 "data_offset": 2048, 00:17:39.140 "data_size": 63488 00:17:39.140 } 00:17:39.140 ] 00:17:39.140 } 00:17:39.140 } 00:17:39.140 }' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:39.140 BaseBdev2 00:17:39.140 BaseBdev3 00:17:39.140 BaseBdev4' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.140 11:33:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.140 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.400 11:33:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.400 [2024-11-05 11:33:38.485866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.400 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.400 "name": "Existed_Raid", 00:17:39.400 "uuid": "3683f1c3-0198-456a-a8ff-0285163a87f7", 00:17:39.400 "strip_size_kb": 64, 00:17:39.400 "state": "online", 00:17:39.400 "raid_level": "raid5f", 00:17:39.400 "superblock": true, 00:17:39.400 "num_base_bdevs": 4, 00:17:39.400 "num_base_bdevs_discovered": 3, 00:17:39.400 "num_base_bdevs_operational": 3, 00:17:39.400 "base_bdevs_list": [ 00:17:39.400 { 00:17:39.400 "name": 
null, 00:17:39.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.400 "is_configured": false, 00:17:39.400 "data_offset": 0, 00:17:39.400 "data_size": 63488 00:17:39.400 }, 00:17:39.400 { 00:17:39.400 "name": "BaseBdev2", 00:17:39.400 "uuid": "6069f2b1-0f74-40f4-800f-5749258006de", 00:17:39.400 "is_configured": true, 00:17:39.400 "data_offset": 2048, 00:17:39.400 "data_size": 63488 00:17:39.400 }, 00:17:39.400 { 00:17:39.400 "name": "BaseBdev3", 00:17:39.400 "uuid": "ad905dee-a6de-416b-91da-d400b50feabf", 00:17:39.400 "is_configured": true, 00:17:39.400 "data_offset": 2048, 00:17:39.400 "data_size": 63488 00:17:39.400 }, 00:17:39.400 { 00:17:39.400 "name": "BaseBdev4", 00:17:39.400 "uuid": "01e44e4b-731b-4f77-87bd-3f6cab51089b", 00:17:39.400 "is_configured": true, 00:17:39.400 "data_offset": 2048, 00:17:39.400 "data_size": 63488 00:17:39.400 } 00:17:39.401 ] 00:17:39.401 }' 00:17:39.401 11:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.401 11:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.970 [2024-11-05 11:33:39.115508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.970 [2024-11-05 11:33:39.115721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.970 [2024-11-05 11:33:39.206860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.970 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.229 [2024-11-05 11:33:39.262770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.229 [2024-11-05 
11:33:39.408943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:40.229 [2024-11-05 11:33:39.408992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:40.229 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.497 11:33:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.497 BaseBdev2 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.497 [ 00:17:40.497 { 00:17:40.497 "name": "BaseBdev2", 00:17:40.497 "aliases": [ 00:17:40.497 "d0d11973-f06e-46fe-83c9-a5bf6bffd22c" 00:17:40.497 ], 00:17:40.497 "product_name": "Malloc disk", 00:17:40.497 "block_size": 512, 00:17:40.497 
"num_blocks": 65536, 00:17:40.497 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:40.497 "assigned_rate_limits": { 00:17:40.497 "rw_ios_per_sec": 0, 00:17:40.497 "rw_mbytes_per_sec": 0, 00:17:40.497 "r_mbytes_per_sec": 0, 00:17:40.497 "w_mbytes_per_sec": 0 00:17:40.497 }, 00:17:40.497 "claimed": false, 00:17:40.497 "zoned": false, 00:17:40.497 "supported_io_types": { 00:17:40.497 "read": true, 00:17:40.497 "write": true, 00:17:40.497 "unmap": true, 00:17:40.497 "flush": true, 00:17:40.497 "reset": true, 00:17:40.497 "nvme_admin": false, 00:17:40.497 "nvme_io": false, 00:17:40.497 "nvme_io_md": false, 00:17:40.497 "write_zeroes": true, 00:17:40.497 "zcopy": true, 00:17:40.497 "get_zone_info": false, 00:17:40.497 "zone_management": false, 00:17:40.497 "zone_append": false, 00:17:40.497 "compare": false, 00:17:40.497 "compare_and_write": false, 00:17:40.497 "abort": true, 00:17:40.497 "seek_hole": false, 00:17:40.497 "seek_data": false, 00:17:40.497 "copy": true, 00:17:40.497 "nvme_iov_md": false 00:17:40.497 }, 00:17:40.497 "memory_domains": [ 00:17:40.497 { 00:17:40.497 "dma_device_id": "system", 00:17:40.497 "dma_device_type": 1 00:17:40.497 }, 00:17:40.497 { 00:17:40.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.497 "dma_device_type": 2 00:17:40.497 } 00:17:40.497 ], 00:17:40.497 "driver_specific": {} 00:17:40.497 } 00:17:40.497 ] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.497 11:33:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.497 BaseBdev3 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:40.497 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 [ 00:17:40.498 { 00:17:40.498 "name": "BaseBdev3", 00:17:40.498 "aliases": [ 00:17:40.498 
"b1267546-ae0a-4efe-9e4c-1a36fd70ce8b" 00:17:40.498 ], 00:17:40.498 "product_name": "Malloc disk", 00:17:40.498 "block_size": 512, 00:17:40.498 "num_blocks": 65536, 00:17:40.498 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:40.498 "assigned_rate_limits": { 00:17:40.498 "rw_ios_per_sec": 0, 00:17:40.498 "rw_mbytes_per_sec": 0, 00:17:40.498 "r_mbytes_per_sec": 0, 00:17:40.498 "w_mbytes_per_sec": 0 00:17:40.498 }, 00:17:40.498 "claimed": false, 00:17:40.498 "zoned": false, 00:17:40.498 "supported_io_types": { 00:17:40.498 "read": true, 00:17:40.498 "write": true, 00:17:40.498 "unmap": true, 00:17:40.498 "flush": true, 00:17:40.498 "reset": true, 00:17:40.498 "nvme_admin": false, 00:17:40.498 "nvme_io": false, 00:17:40.498 "nvme_io_md": false, 00:17:40.498 "write_zeroes": true, 00:17:40.498 "zcopy": true, 00:17:40.498 "get_zone_info": false, 00:17:40.498 "zone_management": false, 00:17:40.498 "zone_append": false, 00:17:40.498 "compare": false, 00:17:40.498 "compare_and_write": false, 00:17:40.498 "abort": true, 00:17:40.498 "seek_hole": false, 00:17:40.498 "seek_data": false, 00:17:40.498 "copy": true, 00:17:40.498 "nvme_iov_md": false 00:17:40.498 }, 00:17:40.498 "memory_domains": [ 00:17:40.498 { 00:17:40.498 "dma_device_id": "system", 00:17:40.498 "dma_device_type": 1 00:17:40.498 }, 00:17:40.498 { 00:17:40.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.498 "dma_device_type": 2 00:17:40.498 } 00:17:40.498 ], 00:17:40.498 "driver_specific": {} 00:17:40.498 } 00:17:40.498 ] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.498 11:33:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 BaseBdev4 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.498 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:40.780 [ 00:17:40.780 { 00:17:40.780 "name": "BaseBdev4", 00:17:40.780 "aliases": [ 00:17:40.780 "6f037824-23a8-49e6-a1af-8abe25c0b62f" 00:17:40.780 ], 00:17:40.780 "product_name": "Malloc disk", 00:17:40.780 "block_size": 512, 00:17:40.780 "num_blocks": 65536, 00:17:40.780 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:40.780 "assigned_rate_limits": { 00:17:40.780 "rw_ios_per_sec": 0, 00:17:40.780 "rw_mbytes_per_sec": 0, 00:17:40.780 "r_mbytes_per_sec": 0, 00:17:40.780 "w_mbytes_per_sec": 0 00:17:40.780 }, 00:17:40.780 "claimed": false, 00:17:40.780 "zoned": false, 00:17:40.780 "supported_io_types": { 00:17:40.780 "read": true, 00:17:40.780 "write": true, 00:17:40.780 "unmap": true, 00:17:40.780 "flush": true, 00:17:40.780 "reset": true, 00:17:40.780 "nvme_admin": false, 00:17:40.780 "nvme_io": false, 00:17:40.780 "nvme_io_md": false, 00:17:40.780 "write_zeroes": true, 00:17:40.780 "zcopy": true, 00:17:40.780 "get_zone_info": false, 00:17:40.780 "zone_management": false, 00:17:40.780 "zone_append": false, 00:17:40.780 "compare": false, 00:17:40.780 "compare_and_write": false, 00:17:40.780 "abort": true, 00:17:40.780 "seek_hole": false, 00:17:40.780 "seek_data": false, 00:17:40.780 "copy": true, 00:17:40.780 "nvme_iov_md": false 00:17:40.780 }, 00:17:40.780 "memory_domains": [ 00:17:40.780 { 00:17:40.780 "dma_device_id": "system", 00:17:40.780 "dma_device_type": 1 00:17:40.780 }, 00:17:40.780 { 00:17:40.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.780 "dma_device_type": 2 00:17:40.780 } 00:17:40.780 ], 00:17:40.780 "driver_specific": {} 00:17:40.780 } 00:17:40.780 ] 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.780 11:33:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 [2024-11-05 11:33:39.793794] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.780 [2024-11-05 11:33:39.793893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.780 [2024-11-05 11:33:39.793934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.780 [2024-11-05 11:33:39.795742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.780 [2024-11-05 11:33:39.795837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.780 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.780 "name": "Existed_Raid", 00:17:40.780 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:40.780 "strip_size_kb": 64, 00:17:40.780 "state": "configuring", 00:17:40.780 "raid_level": "raid5f", 00:17:40.780 "superblock": true, 00:17:40.780 "num_base_bdevs": 4, 00:17:40.780 "num_base_bdevs_discovered": 3, 00:17:40.780 "num_base_bdevs_operational": 4, 00:17:40.780 "base_bdevs_list": [ 00:17:40.780 { 00:17:40.780 "name": "BaseBdev1", 00:17:40.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.780 "is_configured": false, 00:17:40.780 "data_offset": 0, 00:17:40.780 "data_size": 0 00:17:40.780 }, 00:17:40.780 { 00:17:40.780 "name": "BaseBdev2", 00:17:40.780 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:40.780 "is_configured": true, 00:17:40.780 "data_offset": 2048, 00:17:40.780 
"data_size": 63488 00:17:40.780 }, 00:17:40.780 { 00:17:40.780 "name": "BaseBdev3", 00:17:40.780 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:40.780 "is_configured": true, 00:17:40.780 "data_offset": 2048, 00:17:40.781 "data_size": 63488 00:17:40.781 }, 00:17:40.781 { 00:17:40.781 "name": "BaseBdev4", 00:17:40.781 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:40.781 "is_configured": true, 00:17:40.781 "data_offset": 2048, 00:17:40.781 "data_size": 63488 00:17:40.781 } 00:17:40.781 ] 00:17:40.781 }' 00:17:40.781 11:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.781 11:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.073 [2024-11-05 11:33:40.245040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.073 11:33:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.073 "name": "Existed_Raid", 00:17:41.073 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:41.073 "strip_size_kb": 64, 00:17:41.073 "state": "configuring", 00:17:41.073 "raid_level": "raid5f", 00:17:41.073 "superblock": true, 00:17:41.073 "num_base_bdevs": 4, 00:17:41.073 "num_base_bdevs_discovered": 2, 00:17:41.073 "num_base_bdevs_operational": 4, 00:17:41.073 "base_bdevs_list": [ 00:17:41.073 { 00:17:41.073 "name": "BaseBdev1", 00:17:41.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.073 "is_configured": false, 00:17:41.073 "data_offset": 0, 00:17:41.073 "data_size": 0 00:17:41.073 }, 00:17:41.073 { 00:17:41.073 "name": null, 00:17:41.073 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:41.073 
"is_configured": false, 00:17:41.073 "data_offset": 0, 00:17:41.073 "data_size": 63488 00:17:41.073 }, 00:17:41.073 { 00:17:41.073 "name": "BaseBdev3", 00:17:41.073 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:41.073 "is_configured": true, 00:17:41.073 "data_offset": 2048, 00:17:41.073 "data_size": 63488 00:17:41.073 }, 00:17:41.073 { 00:17:41.073 "name": "BaseBdev4", 00:17:41.073 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:41.073 "is_configured": true, 00:17:41.073 "data_offset": 2048, 00:17:41.073 "data_size": 63488 00:17:41.073 } 00:17:41.073 ] 00:17:41.073 }' 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.073 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 [2024-11-05 11:33:40.763577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:41.649 BaseBdev1 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.649 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 [ 00:17:41.649 { 00:17:41.649 "name": "BaseBdev1", 00:17:41.649 "aliases": [ 00:17:41.649 "18d1cdeb-6587-45ab-a2ae-df181e83ad99" 00:17:41.649 ], 00:17:41.649 "product_name": "Malloc disk", 00:17:41.649 "block_size": 512, 00:17:41.649 "num_blocks": 65536, 00:17:41.649 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 
00:17:41.649 "assigned_rate_limits": { 00:17:41.649 "rw_ios_per_sec": 0, 00:17:41.649 "rw_mbytes_per_sec": 0, 00:17:41.649 "r_mbytes_per_sec": 0, 00:17:41.649 "w_mbytes_per_sec": 0 00:17:41.649 }, 00:17:41.649 "claimed": true, 00:17:41.649 "claim_type": "exclusive_write", 00:17:41.649 "zoned": false, 00:17:41.649 "supported_io_types": { 00:17:41.649 "read": true, 00:17:41.649 "write": true, 00:17:41.649 "unmap": true, 00:17:41.649 "flush": true, 00:17:41.649 "reset": true, 00:17:41.649 "nvme_admin": false, 00:17:41.649 "nvme_io": false, 00:17:41.649 "nvme_io_md": false, 00:17:41.649 "write_zeroes": true, 00:17:41.649 "zcopy": true, 00:17:41.649 "get_zone_info": false, 00:17:41.649 "zone_management": false, 00:17:41.649 "zone_append": false, 00:17:41.649 "compare": false, 00:17:41.649 "compare_and_write": false, 00:17:41.649 "abort": true, 00:17:41.649 "seek_hole": false, 00:17:41.649 "seek_data": false, 00:17:41.649 "copy": true, 00:17:41.649 "nvme_iov_md": false 00:17:41.649 }, 00:17:41.649 "memory_domains": [ 00:17:41.650 { 00:17:41.650 "dma_device_id": "system", 00:17:41.650 "dma_device_type": 1 00:17:41.650 }, 00:17:41.650 { 00:17:41.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.650 "dma_device_type": 2 00:17:41.650 } 00:17:41.650 ], 00:17:41.650 "driver_specific": {} 00:17:41.650 } 00:17:41.650 ] 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.650 11:33:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.650 "name": "Existed_Raid", 00:17:41.650 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:41.650 "strip_size_kb": 64, 00:17:41.650 "state": "configuring", 00:17:41.650 "raid_level": "raid5f", 00:17:41.650 "superblock": true, 00:17:41.650 "num_base_bdevs": 4, 00:17:41.650 "num_base_bdevs_discovered": 3, 00:17:41.650 "num_base_bdevs_operational": 4, 00:17:41.650 "base_bdevs_list": [ 00:17:41.650 { 00:17:41.650 "name": "BaseBdev1", 00:17:41.650 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 
00:17:41.650 "is_configured": true, 00:17:41.650 "data_offset": 2048, 00:17:41.650 "data_size": 63488 00:17:41.650 }, 00:17:41.650 { 00:17:41.650 "name": null, 00:17:41.650 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:41.650 "is_configured": false, 00:17:41.650 "data_offset": 0, 00:17:41.650 "data_size": 63488 00:17:41.650 }, 00:17:41.650 { 00:17:41.650 "name": "BaseBdev3", 00:17:41.650 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:41.650 "is_configured": true, 00:17:41.650 "data_offset": 2048, 00:17:41.650 "data_size": 63488 00:17:41.650 }, 00:17:41.650 { 00:17:41.650 "name": "BaseBdev4", 00:17:41.650 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:41.650 "is_configured": true, 00:17:41.650 "data_offset": 2048, 00:17:41.650 "data_size": 63488 00:17:41.650 } 00:17:41.650 ] 00:17:41.650 }' 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.650 11:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.218 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.218 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:42.218 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.219 [2024-11-05 11:33:41.326688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.219 "name": "Existed_Raid", 00:17:42.219 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:42.219 "strip_size_kb": 64, 00:17:42.219 "state": "configuring", 00:17:42.219 "raid_level": "raid5f", 00:17:42.219 "superblock": true, 00:17:42.219 "num_base_bdevs": 4, 00:17:42.219 "num_base_bdevs_discovered": 2, 00:17:42.219 "num_base_bdevs_operational": 4, 00:17:42.219 "base_bdevs_list": [ 00:17:42.219 { 00:17:42.219 "name": "BaseBdev1", 00:17:42.219 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:42.219 "is_configured": true, 00:17:42.219 "data_offset": 2048, 00:17:42.219 "data_size": 63488 00:17:42.219 }, 00:17:42.219 { 00:17:42.219 "name": null, 00:17:42.219 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:42.219 "is_configured": false, 00:17:42.219 "data_offset": 0, 00:17:42.219 "data_size": 63488 00:17:42.219 }, 00:17:42.219 { 00:17:42.219 "name": null, 00:17:42.219 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:42.219 "is_configured": false, 00:17:42.219 "data_offset": 0, 00:17:42.219 "data_size": 63488 00:17:42.219 }, 00:17:42.219 { 00:17:42.219 "name": "BaseBdev4", 00:17:42.219 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:42.219 "is_configured": true, 00:17:42.219 "data_offset": 2048, 00:17:42.219 "data_size": 63488 00:17:42.219 } 00:17:42.219 ] 00:17:42.219 }' 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.219 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 [2024-11-05 11:33:41.825821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.787 11:33:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.787 "name": "Existed_Raid", 00:17:42.787 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:42.787 "strip_size_kb": 64, 00:17:42.787 "state": "configuring", 00:17:42.787 "raid_level": "raid5f", 00:17:42.787 "superblock": true, 00:17:42.787 "num_base_bdevs": 4, 00:17:42.787 "num_base_bdevs_discovered": 3, 00:17:42.787 "num_base_bdevs_operational": 4, 00:17:42.787 "base_bdevs_list": [ 00:17:42.787 { 00:17:42.787 "name": "BaseBdev1", 00:17:42.787 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:42.787 "is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": null, 00:17:42.787 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:42.787 "is_configured": false, 00:17:42.787 "data_offset": 0, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": "BaseBdev3", 00:17:42.787 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:42.787 
"is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": "BaseBdev4", 00:17:42.787 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:42.787 "is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 } 00:17:42.787 ] 00:17:42.787 }' 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.787 11:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.046 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.046 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.047 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.047 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.306 [2024-11-05 11:33:42.364927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.306 "name": "Existed_Raid", 00:17:43.306 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:43.306 "strip_size_kb": 64, 00:17:43.306 "state": "configuring", 00:17:43.306 "raid_level": "raid5f", 00:17:43.306 
"superblock": true, 00:17:43.306 "num_base_bdevs": 4, 00:17:43.306 "num_base_bdevs_discovered": 2, 00:17:43.306 "num_base_bdevs_operational": 4, 00:17:43.306 "base_bdevs_list": [ 00:17:43.306 { 00:17:43.306 "name": null, 00:17:43.306 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:43.306 "is_configured": false, 00:17:43.306 "data_offset": 0, 00:17:43.306 "data_size": 63488 00:17:43.306 }, 00:17:43.306 { 00:17:43.306 "name": null, 00:17:43.306 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:43.306 "is_configured": false, 00:17:43.306 "data_offset": 0, 00:17:43.306 "data_size": 63488 00:17:43.306 }, 00:17:43.306 { 00:17:43.306 "name": "BaseBdev3", 00:17:43.306 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:43.306 "is_configured": true, 00:17:43.306 "data_offset": 2048, 00:17:43.306 "data_size": 63488 00:17:43.306 }, 00:17:43.306 { 00:17:43.306 "name": "BaseBdev4", 00:17:43.306 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:43.306 "is_configured": true, 00:17:43.306 "data_offset": 2048, 00:17:43.306 "data_size": 63488 00:17:43.306 } 00:17:43.306 ] 00:17:43.306 }' 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.306 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.875 [2024-11-05 11:33:42.945848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.875 11:33:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.875 11:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.875 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.875 "name": "Existed_Raid", 00:17:43.875 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:43.875 "strip_size_kb": 64, 00:17:43.875 "state": "configuring", 00:17:43.875 "raid_level": "raid5f", 00:17:43.875 "superblock": true, 00:17:43.875 "num_base_bdevs": 4, 00:17:43.875 "num_base_bdevs_discovered": 3, 00:17:43.875 "num_base_bdevs_operational": 4, 00:17:43.875 "base_bdevs_list": [ 00:17:43.875 { 00:17:43.875 "name": null, 00:17:43.875 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:43.875 "is_configured": false, 00:17:43.875 "data_offset": 0, 00:17:43.876 "data_size": 63488 00:17:43.876 }, 00:17:43.876 { 00:17:43.876 "name": "BaseBdev2", 00:17:43.876 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:43.876 "is_configured": true, 00:17:43.876 "data_offset": 2048, 00:17:43.876 "data_size": 63488 00:17:43.876 }, 00:17:43.876 { 00:17:43.876 "name": "BaseBdev3", 00:17:43.876 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:43.876 "is_configured": true, 00:17:43.876 "data_offset": 2048, 00:17:43.876 "data_size": 63488 00:17:43.876 }, 00:17:43.876 { 00:17:43.876 "name": "BaseBdev4", 00:17:43.876 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:43.876 "is_configured": true, 00:17:43.876 "data_offset": 2048, 00:17:43.876 "data_size": 63488 00:17:43.876 } 00:17:43.876 ] 00:17:43.876 }' 00:17:43.876 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:17:43.876 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.135 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.135 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.135 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.135 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18d1cdeb-6587-45ab-a2ae-df181e83ad99 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.395 [2024-11-05 11:33:43.536403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:44.395 [2024-11-05 11:33:43.536623] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:44.395 [2024-11-05 11:33:43.536635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:44.395 [2024-11-05 11:33:43.536857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:44.395 NewBaseBdev 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.395 [2024-11-05 11:33:43.543660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:44.395 [2024-11-05 11:33:43.543725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:44.395 [2024-11-05 11:33:43.543924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.395 [ 00:17:44.395 { 00:17:44.395 "name": "NewBaseBdev", 00:17:44.395 "aliases": [ 00:17:44.395 "18d1cdeb-6587-45ab-a2ae-df181e83ad99" 00:17:44.395 ], 00:17:44.395 "product_name": "Malloc disk", 00:17:44.395 "block_size": 512, 00:17:44.395 "num_blocks": 65536, 00:17:44.395 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:44.395 "assigned_rate_limits": { 00:17:44.395 "rw_ios_per_sec": 0, 00:17:44.395 "rw_mbytes_per_sec": 0, 00:17:44.395 "r_mbytes_per_sec": 0, 00:17:44.395 "w_mbytes_per_sec": 0 00:17:44.395 }, 00:17:44.395 "claimed": true, 00:17:44.395 "claim_type": "exclusive_write", 00:17:44.395 "zoned": false, 00:17:44.395 "supported_io_types": { 00:17:44.395 "read": true, 00:17:44.395 "write": true, 00:17:44.395 "unmap": true, 00:17:44.395 "flush": true, 00:17:44.395 "reset": true, 00:17:44.395 "nvme_admin": false, 00:17:44.395 "nvme_io": false, 00:17:44.395 "nvme_io_md": false, 00:17:44.395 "write_zeroes": true, 00:17:44.395 "zcopy": true, 00:17:44.395 "get_zone_info": false, 00:17:44.395 "zone_management": false, 00:17:44.395 "zone_append": false, 00:17:44.395 "compare": false, 00:17:44.395 "compare_and_write": false, 00:17:44.395 "abort": true, 00:17:44.395 "seek_hole": false, 00:17:44.395 "seek_data": false, 00:17:44.395 "copy": true, 00:17:44.395 "nvme_iov_md": false 00:17:44.395 }, 00:17:44.395 "memory_domains": [ 00:17:44.395 { 00:17:44.395 "dma_device_id": "system", 00:17:44.395 "dma_device_type": 1 00:17:44.395 }, 00:17:44.395 { 00:17:44.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.395 "dma_device_type": 2 00:17:44.395 } 
00:17:44.395 ], 00:17:44.395 "driver_specific": {} 00:17:44.395 } 00:17:44.395 ] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.395 
11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.395 "name": "Existed_Raid", 00:17:44.395 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:44.395 "strip_size_kb": 64, 00:17:44.395 "state": "online", 00:17:44.395 "raid_level": "raid5f", 00:17:44.395 "superblock": true, 00:17:44.395 "num_base_bdevs": 4, 00:17:44.395 "num_base_bdevs_discovered": 4, 00:17:44.395 "num_base_bdevs_operational": 4, 00:17:44.395 "base_bdevs_list": [ 00:17:44.395 { 00:17:44.395 "name": "NewBaseBdev", 00:17:44.395 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:44.395 "is_configured": true, 00:17:44.395 "data_offset": 2048, 00:17:44.395 "data_size": 63488 00:17:44.395 }, 00:17:44.395 { 00:17:44.395 "name": "BaseBdev2", 00:17:44.395 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:44.395 "is_configured": true, 00:17:44.395 "data_offset": 2048, 00:17:44.395 "data_size": 63488 00:17:44.395 }, 00:17:44.395 { 00:17:44.395 "name": "BaseBdev3", 00:17:44.395 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:44.395 "is_configured": true, 00:17:44.395 "data_offset": 2048, 00:17:44.395 "data_size": 63488 00:17:44.395 }, 00:17:44.395 { 00:17:44.395 "name": "BaseBdev4", 00:17:44.395 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:44.395 "is_configured": true, 00:17:44.395 "data_offset": 2048, 00:17:44.395 "data_size": 63488 00:17:44.395 } 00:17:44.395 ] 00:17:44.395 }' 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.395 11:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.964 [2024-11-05 11:33:44.055327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.964 "name": "Existed_Raid", 00:17:44.964 "aliases": [ 00:17:44.964 "6169ec01-a21d-4bf3-aa5d-e09409a406a8" 00:17:44.964 ], 00:17:44.964 "product_name": "Raid Volume", 00:17:44.964 "block_size": 512, 00:17:44.964 "num_blocks": 190464, 00:17:44.964 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:44.964 "assigned_rate_limits": { 00:17:44.964 "rw_ios_per_sec": 0, 00:17:44.964 "rw_mbytes_per_sec": 0, 00:17:44.964 "r_mbytes_per_sec": 0, 00:17:44.964 "w_mbytes_per_sec": 0 00:17:44.964 }, 00:17:44.964 "claimed": false, 00:17:44.964 "zoned": false, 00:17:44.964 "supported_io_types": { 00:17:44.964 "read": true, 00:17:44.964 "write": true, 00:17:44.964 "unmap": false, 00:17:44.964 "flush": false, 
00:17:44.964 "reset": true, 00:17:44.964 "nvme_admin": false, 00:17:44.964 "nvme_io": false, 00:17:44.964 "nvme_io_md": false, 00:17:44.964 "write_zeroes": true, 00:17:44.964 "zcopy": false, 00:17:44.964 "get_zone_info": false, 00:17:44.964 "zone_management": false, 00:17:44.964 "zone_append": false, 00:17:44.964 "compare": false, 00:17:44.964 "compare_and_write": false, 00:17:44.964 "abort": false, 00:17:44.964 "seek_hole": false, 00:17:44.964 "seek_data": false, 00:17:44.964 "copy": false, 00:17:44.964 "nvme_iov_md": false 00:17:44.964 }, 00:17:44.964 "driver_specific": { 00:17:44.964 "raid": { 00:17:44.964 "uuid": "6169ec01-a21d-4bf3-aa5d-e09409a406a8", 00:17:44.964 "strip_size_kb": 64, 00:17:44.964 "state": "online", 00:17:44.964 "raid_level": "raid5f", 00:17:44.964 "superblock": true, 00:17:44.964 "num_base_bdevs": 4, 00:17:44.964 "num_base_bdevs_discovered": 4, 00:17:44.964 "num_base_bdevs_operational": 4, 00:17:44.964 "base_bdevs_list": [ 00:17:44.964 { 00:17:44.964 "name": "NewBaseBdev", 00:17:44.964 "uuid": "18d1cdeb-6587-45ab-a2ae-df181e83ad99", 00:17:44.964 "is_configured": true, 00:17:44.964 "data_offset": 2048, 00:17:44.964 "data_size": 63488 00:17:44.964 }, 00:17:44.964 { 00:17:44.964 "name": "BaseBdev2", 00:17:44.964 "uuid": "d0d11973-f06e-46fe-83c9-a5bf6bffd22c", 00:17:44.964 "is_configured": true, 00:17:44.964 "data_offset": 2048, 00:17:44.964 "data_size": 63488 00:17:44.964 }, 00:17:44.964 { 00:17:44.964 "name": "BaseBdev3", 00:17:44.964 "uuid": "b1267546-ae0a-4efe-9e4c-1a36fd70ce8b", 00:17:44.964 "is_configured": true, 00:17:44.964 "data_offset": 2048, 00:17:44.964 "data_size": 63488 00:17:44.964 }, 00:17:44.964 { 00:17:44.964 "name": "BaseBdev4", 00:17:44.964 "uuid": "6f037824-23a8-49e6-a1af-8abe25c0b62f", 00:17:44.964 "is_configured": true, 00:17:44.964 "data_offset": 2048, 00:17:44.964 "data_size": 63488 00:17:44.964 } 00:17:44.964 ] 00:17:44.964 } 00:17:44.964 } 00:17:44.964 }' 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.964 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:44.964 BaseBdev2 00:17:44.964 BaseBdev3 00:17:44.964 BaseBdev4' 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.965 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.224 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.225 [2024-11-05 11:33:44.402474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.225 [2024-11-05 11:33:44.402542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.225 [2024-11-05 11:33:44.402614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.225 [2024-11-05 11:33:44.402895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.225 [2024-11-05 11:33:44.402905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83471 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83471 ']' 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
83471 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83471 00:17:45.225 killing process with pid 83471 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83471' 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83471 00:17:45.225 [2024-11-05 11:33:44.452495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.225 11:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83471 00:17:45.793 [2024-11-05 11:33:44.820196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.732 11:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:46.732 00:17:46.732 real 0m11.704s 00:17:46.732 user 0m18.654s 00:17:46.732 sys 0m2.248s 00:17:46.732 11:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:46.732 11:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.732 ************************************ 00:17:46.732 END TEST raid5f_state_function_test_sb 00:17:46.732 ************************************ 00:17:46.732 11:33:45 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:46.732 11:33:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 
-le 1 ']' 00:17:46.732 11:33:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:46.732 11:33:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.732 ************************************ 00:17:46.732 START TEST raid5f_superblock_test 00:17:46.732 ************************************ 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:46.732 11:33:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84137 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84137 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84137 ']' 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.732 11:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.992 [2024-11-05 11:33:46.028879] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:17:46.992 [2024-11-05 11:33:46.029057] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84137 ] 00:17:46.992 [2024-11-05 11:33:46.204955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.251 [2024-11-05 11:33:46.316703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.251 [2024-11-05 11:33:46.507810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.251 [2024-11-05 11:33:46.507926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.820 malloc1 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.820 [2024-11-05 11:33:46.909053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.820 [2024-11-05 11:33:46.909163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.820 [2024-11-05 11:33:46.909203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.820 [2024-11-05 11:33:46.909232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.820 [2024-11-05 11:33:46.911291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.820 [2024-11-05 11:33:46.911361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.820 pt1 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.820 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
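The trace above is one pass through the base-bdev setup loop in `bdev_raid.sh` (the `@416`-`@426` lines): for each of the four base bdevs it creates a malloc bdev (`bdev_malloc_create 32 512`, i.e. 32 MB with 512-byte blocks) and wraps it in a passthru bdev `ptN` carrying the fixed `00000000-...-00000000000N` UUID, before assembling the raid5f volume. A dry-run model of that loop, where `rpc_cmd` is a stand-in that merely echoes the RPC it would issue rather than talking to the SPDK app:

```shell
#!/usr/bin/env bash
# Dry-run model of the bdev_raid.sh setup loop: rpc_cmd just echoes the
# JSON-RPC invocation instead of sending it to the running bdev_svc app.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    # UUIDs follow the fixed 00000000-0000-0000-0000-00000000000N pattern
    # visible in the log.
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")

    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")

    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# Finally assemble the raid5f volume: 64 KiB strip (-z 64), superblock (-s).
rpc_cmd bdev_raid_create -z 64 -r raid5f -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```

This mirrors the command sequence recorded in the trace; error handling and the `xtrace_disable` bookkeeping of the real script are omitted.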
00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 malloc2 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 [2024-11-05 11:33:46.966189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.821 [2024-11-05 11:33:46.966235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.821 [2024-11-05 11:33:46.966252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.821 [2024-11-05 11:33:46.966260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.821 [2024-11-05 11:33:46.968236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.821 [2024-11-05 11:33:46.968318] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.821 pt2 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.821 11:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 malloc3 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 [2024-11-05 11:33:47.053249] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.821 [2024-11-05 11:33:47.053348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.821 [2024-11-05 11:33:47.053384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.821 [2024-11-05 11:33:47.053411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.821 [2024-11-05 11:33:47.055486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.821 [2024-11-05 11:33:47.055559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.821 pt3 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:47.821 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.821 11:33:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.081 malloc4 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.081 [2024-11-05 11:33:47.110862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:48.081 [2024-11-05 11:33:47.110961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.081 [2024-11-05 11:33:47.110994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:48.081 [2024-11-05 11:33:47.111020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.081 [2024-11-05 11:33:47.113037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.081 [2024-11-05 11:33:47.113104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:48.081 pt4 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.081 [2024-11-05 11:33:47.122876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.081 [2024-11-05 11:33:47.124758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.081 [2024-11-05 11:33:47.124859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.081 [2024-11-05 11:33:47.124938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:48.081 [2024-11-05 11:33:47.125177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.081 [2024-11-05 11:33:47.125228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:48.081 [2024-11-05 11:33:47.125476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:48.081 [2024-11-05 11:33:47.132366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.081 [2024-11-05 11:33:47.132421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.081 [2024-11-05 11:33:47.132641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.081 
11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.081 "name": "raid_bdev1", 00:17:48.081 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:48.081 "strip_size_kb": 64, 00:17:48.081 "state": "online", 00:17:48.081 "raid_level": "raid5f", 00:17:48.081 "superblock": true, 00:17:48.081 "num_base_bdevs": 4, 00:17:48.081 "num_base_bdevs_discovered": 4, 00:17:48.081 "num_base_bdevs_operational": 4, 00:17:48.081 "base_bdevs_list": [ 00:17:48.081 { 00:17:48.081 "name": "pt1", 00:17:48.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "pt2", 00:17:48.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 
"data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "pt3", 00:17:48.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "pt4", 00:17:48.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 } 00:17:48.081 ] 00:17:48.081 }' 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.081 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.340 [2024-11-05 11:33:47.584141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.340 11:33:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.600 "name": "raid_bdev1", 00:17:48.600 "aliases": [ 00:17:48.600 "e058ebd2-6d51-4d44-b011-c9262373ed21" 00:17:48.600 ], 00:17:48.600 "product_name": "Raid Volume", 00:17:48.600 "block_size": 512, 00:17:48.600 "num_blocks": 190464, 00:17:48.600 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:48.600 "assigned_rate_limits": { 00:17:48.600 "rw_ios_per_sec": 0, 00:17:48.600 "rw_mbytes_per_sec": 0, 00:17:48.600 "r_mbytes_per_sec": 0, 00:17:48.600 "w_mbytes_per_sec": 0 00:17:48.600 }, 00:17:48.600 "claimed": false, 00:17:48.600 "zoned": false, 00:17:48.600 "supported_io_types": { 00:17:48.600 "read": true, 00:17:48.600 "write": true, 00:17:48.600 "unmap": false, 00:17:48.600 "flush": false, 00:17:48.600 "reset": true, 00:17:48.600 "nvme_admin": false, 00:17:48.600 "nvme_io": false, 00:17:48.600 "nvme_io_md": false, 00:17:48.600 "write_zeroes": true, 00:17:48.600 "zcopy": false, 00:17:48.600 "get_zone_info": false, 00:17:48.600 "zone_management": false, 00:17:48.600 "zone_append": false, 00:17:48.600 "compare": false, 00:17:48.600 "compare_and_write": false, 00:17:48.600 "abort": false, 00:17:48.600 "seek_hole": false, 00:17:48.600 "seek_data": false, 00:17:48.600 "copy": false, 00:17:48.600 "nvme_iov_md": false 00:17:48.600 }, 00:17:48.600 "driver_specific": { 00:17:48.600 "raid": { 00:17:48.600 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:48.600 "strip_size_kb": 64, 00:17:48.600 "state": "online", 00:17:48.600 "raid_level": "raid5f", 00:17:48.600 "superblock": true, 00:17:48.600 "num_base_bdevs": 4, 00:17:48.600 "num_base_bdevs_discovered": 4, 00:17:48.600 "num_base_bdevs_operational": 4, 00:17:48.600 "base_bdevs_list": [ 00:17:48.600 { 00:17:48.600 "name": "pt1", 00:17:48.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 
00:17:48.600 "data_size": 63488 00:17:48.600 }, 00:17:48.600 { 00:17:48.600 "name": "pt2", 00:17:48.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 00:17:48.600 "data_size": 63488 00:17:48.600 }, 00:17:48.600 { 00:17:48.600 "name": "pt3", 00:17:48.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 00:17:48.600 "data_size": 63488 00:17:48.600 }, 00:17:48.600 { 00:17:48.600 "name": "pt4", 00:17:48.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:48.600 "is_configured": true, 00:17:48.600 "data_offset": 2048, 00:17:48.600 "data_size": 63488 00:17:48.600 } 00:17:48.600 ] 00:17:48.600 } 00:17:48.600 } 00:17:48.600 }' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:48.600 pt2 00:17:48.600 pt3 00:17:48.600 pt4' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.600 11:33:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.600 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 [2024-11-05 11:33:47.923489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e058ebd2-6d51-4d44-b011-c9262373ed21 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e058ebd2-6d51-4d44-b011-c9262373ed21 ']' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 [2024-11-05 11:33:47.967274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.860 [2024-11-05 11:33:47.967295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.860 [2024-11-05 11:33:47.967362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.860 [2024-11-05 11:33:47.967439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.860 [2024-11-05 11:33:47.967452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 11:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.860 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:48.860 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:48.860 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.860 
11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:48.860 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 11:33:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:48.861 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.120 [2024-11-05 11:33:48.135009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:49.120 [2024-11-05 11:33:48.136974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:49.120 [2024-11-05 11:33:48.137022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:49.120 [2024-11-05 11:33:48.137052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:49.120 [2024-11-05 11:33:48.137096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:49.121 [2024-11-05 11:33:48.137148] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:49.121 [2024-11-05 11:33:48.137166] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:49.121 [2024-11-05 11:33:48.137183] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:49.121 [2024-11-05 11:33:48.137194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.121 [2024-11-05 11:33:48.137204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:49.121 request: 00:17:49.121 { 00:17:49.121 "name": "raid_bdev1", 00:17:49.121 "raid_level": "raid5f", 00:17:49.121 "base_bdevs": [ 00:17:49.121 "malloc1", 00:17:49.121 "malloc2", 00:17:49.121 "malloc3", 00:17:49.121 "malloc4" 00:17:49.121 ], 00:17:49.121 "strip_size_kb": 64, 00:17:49.121 "superblock": false, 00:17:49.121 "method": "bdev_raid_create", 00:17:49.121 "req_id": 1 00:17:49.121 } 00:17:49.121 Got JSON-RPC error response 
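The `NOT` wrapper seen in the trace (`common/autotest_common.sh@650`-`@677`) runs a command that is expected to fail and inverts its exit status, so the negative test passes only when `bdev_raid_create` is rejected — here with JSON-RPC error `-17` ("File exists"), because the malloc bdevs still carry the superblock of the previously deleted `raid_bdev1`. A simplified sketch of that inversion (the real helper also validates the wrapped command via `valid_exec_arg` and distinguishes signal exits, `es > 128`):

```shell
#!/usr/bin/env bash
# Simplified model of autotest_common.sh's NOT: succeed iff the wrapped
# command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert: a non-zero exit from the command means the negative test passed.
    (( es != 0 ))
}
```

Usage matches the trace: `NOT rpc_cmd bdev_raid_create ...` returns 0 precisely because the RPC exits non-zero.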
00:17:49.121 response: 00:17:49.121 { 00:17:49.121 "code": -17, 00:17:49.121 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:49.121 } 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.121 [2024-11-05 11:33:48.202854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.121 [2024-11-05 11:33:48.202901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:49.121 [2024-11-05 11:33:48.202915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:49.121 [2024-11-05 11:33:48.202925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.121 [2024-11-05 11:33:48.205010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.121 [2024-11-05 11:33:48.205098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.121 [2024-11-05 11:33:48.205179] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:49.121 [2024-11-05 11:33:48.205243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.121 pt1 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.121 "name": "raid_bdev1", 00:17:49.121 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:49.121 "strip_size_kb": 64, 00:17:49.121 "state": "configuring", 00:17:49.121 "raid_level": "raid5f", 00:17:49.121 "superblock": true, 00:17:49.121 "num_base_bdevs": 4, 00:17:49.121 "num_base_bdevs_discovered": 1, 00:17:49.121 "num_base_bdevs_operational": 4, 00:17:49.121 "base_bdevs_list": [ 00:17:49.121 { 00:17:49.121 "name": "pt1", 00:17:49.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.121 "is_configured": true, 00:17:49.121 "data_offset": 2048, 00:17:49.121 "data_size": 63488 00:17:49.121 }, 00:17:49.121 { 00:17:49.121 "name": null, 00:17:49.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.121 "is_configured": false, 00:17:49.121 "data_offset": 2048, 00:17:49.121 "data_size": 63488 00:17:49.121 }, 00:17:49.121 { 00:17:49.121 "name": null, 00:17:49.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.121 "is_configured": false, 00:17:49.121 "data_offset": 2048, 00:17:49.121 "data_size": 63488 00:17:49.121 }, 00:17:49.121 { 00:17:49.121 "name": null, 00:17:49.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:49.121 "is_configured": false, 00:17:49.121 "data_offset": 2048, 00:17:49.121 "data_size": 63488 00:17:49.121 } 00:17:49.121 ] 00:17:49.121 }' 
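The `verify_raid_bdev_state` helper above pulls the raid bdev record with `rpc_cmd bdev_raid_get_bdevs all`, filters it through `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields such as `state`, `raid_level`, and the base-bdev counts. The same check can be sketched in Python against the record printed in the log (a minimal sketch: the JSON literal is copied from the output above rather than fetched from a live SPDK target, and `verify_raid_bdev_state` is a hypothetical stand-in for the shell helper, not an SPDK API):

```python
import json

# raid bdev record as printed by `bdev_raid_get_bdevs all` in the log above,
# trimmed to the fields the shell helper asserts on. Only pt1 is configured
# yet, so the volume is still "configuring".
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": null,  "is_configured": false},
    {"name": null,  "is_configured": false},
    {"name": null,  "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Python mirror of the shell helper: compare the fields it checks."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count must agree with the configured entries in the list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

discovered = verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
print(discovered)  # 1: only pt1 has registered so far
```

This mirrors why the test expects `num_base_bdevs_discovered` to be 1 here: only the `pt1` passthru bdev carrying the raid5f superblock has been recreated at this point.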
00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.121 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.380 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:49.639 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.639 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.640 [2024-11-05 11:33:48.662087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.640 [2024-11-05 11:33:48.662221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.640 [2024-11-05 11:33:48.662257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:49.640 [2024-11-05 11:33:48.662286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.640 [2024-11-05 11:33:48.662699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.640 [2024-11-05 11:33:48.662760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.640 [2024-11-05 11:33:48.662858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.640 [2024-11-05 11:33:48.662910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.640 pt2 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.640 [2024-11-05 11:33:48.674070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.640 "name": "raid_bdev1", 00:17:49.640 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:49.640 "strip_size_kb": 64, 00:17:49.640 "state": "configuring", 00:17:49.640 "raid_level": "raid5f", 00:17:49.640 "superblock": true, 00:17:49.640 "num_base_bdevs": 4, 00:17:49.640 "num_base_bdevs_discovered": 1, 00:17:49.640 "num_base_bdevs_operational": 4, 00:17:49.640 "base_bdevs_list": [ 00:17:49.640 { 00:17:49.640 "name": "pt1", 00:17:49.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.640 "is_configured": true, 00:17:49.640 "data_offset": 2048, 00:17:49.640 "data_size": 63488 00:17:49.640 }, 00:17:49.640 { 00:17:49.640 "name": null, 00:17:49.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.640 "is_configured": false, 00:17:49.640 "data_offset": 0, 00:17:49.640 "data_size": 63488 00:17:49.640 }, 00:17:49.640 { 00:17:49.640 "name": null, 00:17:49.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.640 "is_configured": false, 00:17:49.640 "data_offset": 2048, 00:17:49.640 "data_size": 63488 00:17:49.640 }, 00:17:49.640 { 00:17:49.640 "name": null, 00:17:49.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:49.640 "is_configured": false, 00:17:49.640 "data_offset": 2048, 00:17:49.640 "data_size": 63488 00:17:49.640 } 00:17:49.640 ] 00:17:49.640 }' 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.640 11:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
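The loop that follows recreates pt2 through pt4 with `bdev_passthru_create`; once the fourth base bdev registers, the log reports the assembled volume with `blockcnt 190464, blocklen 512` and each base bdev contributing `data_offset 2048, data_size 63488`. Those figures are consistent with raid5f reserving one strip's worth of parity per stripe, i.e. usable capacity spanning the data areas of n - 1 base bdevs. A quick arithmetic check against the numbers in the log (plain arithmetic, not an SPDK API call):

```python
# Figures taken verbatim from the log records above.
num_base_bdevs = 4
data_offset_blocks = 2048   # superblock area at the front of each base bdev
data_size_blocks = 63488    # per-base-bdev data area ("data_size")
block_size = 512            # "blocklen 512"

# raid5f stores one parity strip per stripe, so the exposed capacity
# covers the data areas of (n - 1) base bdevs.
usable_blocks = (num_base_bdevs - 1) * data_size_blocks
print(usable_blocks)                        # 190464, matching "blockcnt 190464"

# Each base bdev is data_offset + data_size = 65536 blocks = 32 MiB,
# consistent with a malloc bdev minus nothing: the 2048-block offset
# (1 MiB) holds the raid superblock that the examine path rediscovers.
per_base_blocks = data_offset_blocks + data_size_blocks
print(per_base_blocks)                      # 65536
print(usable_blocks * block_size)           # 97517568 bytes, i.e. 93 MiB
```

This is why the `data_offset: 2048` / `data_size: 63488` pairs recur for every configured slot in the `base_bdevs_list` dumps: the superblock variant of the test carves 1 MiB off the front of each base bdev before striping.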
00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.900 [2024-11-05 11:33:49.125251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.900 [2024-11-05 11:33:49.125336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.900 [2024-11-05 11:33:49.125369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:49.900 [2024-11-05 11:33:49.125395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.900 [2024-11-05 11:33:49.125783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.900 [2024-11-05 11:33:49.125840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.900 [2024-11-05 11:33:49.125933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.900 [2024-11-05 11:33:49.125980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.900 pt2 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.900 [2024-11-05 11:33:49.137231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:49.900 [2024-11-05 11:33:49.137272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.900 [2024-11-05 11:33:49.137287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:49.900 [2024-11-05 11:33:49.137295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.900 [2024-11-05 11:33:49.137603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.900 [2024-11-05 11:33:49.137619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:49.900 [2024-11-05 11:33:49.137670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:49.900 [2024-11-05 11:33:49.137685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:49.900 pt3 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.900 [2024-11-05 11:33:49.149204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:49.900 [2024-11-05 11:33:49.149244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.900 [2024-11-05 11:33:49.149260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:49.900 [2024-11-05 11:33:49.149267] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.900 [2024-11-05 11:33:49.149583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.900 [2024-11-05 11:33:49.149598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:49.900 [2024-11-05 11:33:49.149646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:49.900 [2024-11-05 11:33:49.149662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:49.900 [2024-11-05 11:33:49.149784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.900 [2024-11-05 11:33:49.149791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:49.900 [2024-11-05 11:33:49.150000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:49.900 [2024-11-05 11:33:49.156603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.900 [2024-11-05 11:33:49.156626] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:49.900 [2024-11-05 11:33:49.156776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.900 pt4 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.900 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.159 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.159 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.159 "name": "raid_bdev1", 00:17:50.159 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:50.159 "strip_size_kb": 64, 00:17:50.159 "state": "online", 00:17:50.159 "raid_level": "raid5f", 00:17:50.159 "superblock": true, 00:17:50.159 "num_base_bdevs": 4, 00:17:50.159 "num_base_bdevs_discovered": 4, 00:17:50.159 "num_base_bdevs_operational": 4, 00:17:50.159 "base_bdevs_list": [ 00:17:50.159 { 00:17:50.159 "name": "pt1", 00:17:50.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.159 "is_configured": true, 00:17:50.159 
"data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "pt2", 00:17:50.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "pt3", 00:17:50.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "pt4", 00:17:50.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 } 00:17:50.159 ] 00:17:50.159 }' 00:17:50.159 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.159 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.421 11:33:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.421 [2024-11-05 11:33:49.620438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.421 "name": "raid_bdev1", 00:17:50.421 "aliases": [ 00:17:50.421 "e058ebd2-6d51-4d44-b011-c9262373ed21" 00:17:50.421 ], 00:17:50.421 "product_name": "Raid Volume", 00:17:50.421 "block_size": 512, 00:17:50.421 "num_blocks": 190464, 00:17:50.421 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:50.421 "assigned_rate_limits": { 00:17:50.421 "rw_ios_per_sec": 0, 00:17:50.421 "rw_mbytes_per_sec": 0, 00:17:50.421 "r_mbytes_per_sec": 0, 00:17:50.421 "w_mbytes_per_sec": 0 00:17:50.421 }, 00:17:50.421 "claimed": false, 00:17:50.421 "zoned": false, 00:17:50.421 "supported_io_types": { 00:17:50.421 "read": true, 00:17:50.421 "write": true, 00:17:50.421 "unmap": false, 00:17:50.421 "flush": false, 00:17:50.421 "reset": true, 00:17:50.421 "nvme_admin": false, 00:17:50.421 "nvme_io": false, 00:17:50.421 "nvme_io_md": false, 00:17:50.421 "write_zeroes": true, 00:17:50.421 "zcopy": false, 00:17:50.421 "get_zone_info": false, 00:17:50.421 "zone_management": false, 00:17:50.421 "zone_append": false, 00:17:50.421 "compare": false, 00:17:50.421 "compare_and_write": false, 00:17:50.421 "abort": false, 00:17:50.421 "seek_hole": false, 00:17:50.421 "seek_data": false, 00:17:50.421 "copy": false, 00:17:50.421 "nvme_iov_md": false 00:17:50.421 }, 00:17:50.421 "driver_specific": { 00:17:50.421 "raid": { 00:17:50.421 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:50.421 "strip_size_kb": 64, 00:17:50.421 "state": "online", 00:17:50.421 "raid_level": "raid5f", 00:17:50.421 "superblock": true, 00:17:50.421 "num_base_bdevs": 4, 00:17:50.421 "num_base_bdevs_discovered": 4, 
00:17:50.421 "num_base_bdevs_operational": 4, 00:17:50.421 "base_bdevs_list": [ 00:17:50.421 { 00:17:50.421 "name": "pt1", 00:17:50.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.421 "is_configured": true, 00:17:50.421 "data_offset": 2048, 00:17:50.421 "data_size": 63488 00:17:50.421 }, 00:17:50.421 { 00:17:50.421 "name": "pt2", 00:17:50.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.421 "is_configured": true, 00:17:50.421 "data_offset": 2048, 00:17:50.421 "data_size": 63488 00:17:50.421 }, 00:17:50.421 { 00:17:50.421 "name": "pt3", 00:17:50.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.421 "is_configured": true, 00:17:50.421 "data_offset": 2048, 00:17:50.421 "data_size": 63488 00:17:50.421 }, 00:17:50.421 { 00:17:50.421 "name": "pt4", 00:17:50.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:50.421 "is_configured": true, 00:17:50.421 "data_offset": 2048, 00:17:50.421 "data_size": 63488 00:17:50.421 } 00:17:50.421 ] 00:17:50.421 } 00:17:50.421 } 00:17:50.421 }' 00:17:50.421 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.680 pt2 00:17:50.680 pt3 00:17:50.680 pt4' 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:50.680 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.681 11:33:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.681 [2024-11-05 11:33:49.931843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.681 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.940 
11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e058ebd2-6d51-4d44-b011-c9262373ed21 '!=' e058ebd2-6d51-4d44-b011-c9262373ed21 ']' 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 [2024-11-05 11:33:49.975650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.940 11:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.940 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.940 "name": "raid_bdev1", 00:17:50.940 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:50.940 "strip_size_kb": 64, 00:17:50.940 "state": "online", 00:17:50.940 "raid_level": "raid5f", 00:17:50.940 "superblock": true, 00:17:50.940 "num_base_bdevs": 4, 00:17:50.940 "num_base_bdevs_discovered": 3, 00:17:50.940 "num_base_bdevs_operational": 3, 00:17:50.940 "base_bdevs_list": [ 00:17:50.940 { 00:17:50.940 "name": null, 00:17:50.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.940 "is_configured": false, 00:17:50.940 "data_offset": 0, 00:17:50.940 "data_size": 63488 00:17:50.940 }, 00:17:50.940 { 00:17:50.940 "name": "pt2", 00:17:50.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.941 "is_configured": true, 00:17:50.941 "data_offset": 2048, 00:17:50.941 "data_size": 63488 00:17:50.941 }, 00:17:50.941 { 00:17:50.941 "name": "pt3", 00:17:50.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.941 "is_configured": true, 00:17:50.941 "data_offset": 2048, 00:17:50.941 "data_size": 63488 00:17:50.941 }, 00:17:50.941 { 00:17:50.941 "name": "pt4", 00:17:50.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:50.941 "is_configured": true, 00:17:50.941 
"data_offset": 2048, 00:17:50.941 "data_size": 63488 00:17:50.941 } 00:17:50.941 ] 00:17:50.941 }' 00:17:50.941 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.941 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.200 [2024-11-05 11:33:50.435001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.200 [2024-11-05 11:33:50.435026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.200 [2024-11-05 11:33:50.435082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.200 [2024-11-05 11:33:50.435167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.200 [2024-11-05 11:33:50.435177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.200 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:51.459 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.460 [2024-11-05 11:33:50.534840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.460 [2024-11-05 11:33:50.534886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.460 [2024-11-05 11:33:50.534903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:51.460 [2024-11-05 11:33:50.534911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.460 [2024-11-05 11:33:50.537238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.460 [2024-11-05 11:33:50.537281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.460 [2024-11-05 11:33:50.537351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.460 [2024-11-05 11:33:50.537395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.460 pt2 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.460 "name": "raid_bdev1", 00:17:51.460 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:51.460 "strip_size_kb": 64, 00:17:51.460 "state": "configuring", 00:17:51.460 "raid_level": "raid5f", 00:17:51.460 "superblock": true, 00:17:51.460 
"num_base_bdevs": 4, 00:17:51.460 "num_base_bdevs_discovered": 1, 00:17:51.460 "num_base_bdevs_operational": 3, 00:17:51.460 "base_bdevs_list": [ 00:17:51.460 { 00:17:51.460 "name": null, 00:17:51.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.460 "is_configured": false, 00:17:51.460 "data_offset": 2048, 00:17:51.460 "data_size": 63488 00:17:51.460 }, 00:17:51.460 { 00:17:51.460 "name": "pt2", 00:17:51.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.460 "is_configured": true, 00:17:51.460 "data_offset": 2048, 00:17:51.460 "data_size": 63488 00:17:51.460 }, 00:17:51.460 { 00:17:51.460 "name": null, 00:17:51.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.460 "is_configured": false, 00:17:51.460 "data_offset": 2048, 00:17:51.460 "data_size": 63488 00:17:51.460 }, 00:17:51.460 { 00:17:51.460 "name": null, 00:17:51.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:51.460 "is_configured": false, 00:17:51.460 "data_offset": 2048, 00:17:51.460 "data_size": 63488 00:17:51.460 } 00:17:51.460 ] 00:17:51.460 }' 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.460 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.719 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:51.719 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.720 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.720 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.720 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.720 [2024-11-05 11:33:50.990094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.720 [2024-11-05 
11:33:50.990211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.720 [2024-11-05 11:33:50.990265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:51.720 [2024-11-05 11:33:50.990295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.720 [2024-11-05 11:33:50.990728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.720 [2024-11-05 11:33:50.990789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.720 [2024-11-05 11:33:50.990898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:51.720 [2024-11-05 11:33:50.990958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.720 pt3 00:17:51.720 11:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:51.979 11:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.979 "name": "raid_bdev1", 00:17:51.979 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:51.979 "strip_size_kb": 64, 00:17:51.979 "state": "configuring", 00:17:51.979 "raid_level": "raid5f", 00:17:51.979 "superblock": true, 00:17:51.979 "num_base_bdevs": 4, 00:17:51.979 "num_base_bdevs_discovered": 2, 00:17:51.979 "num_base_bdevs_operational": 3, 00:17:51.979 "base_bdevs_list": [ 00:17:51.979 { 00:17:51.979 "name": null, 00:17:51.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.979 "is_configured": false, 00:17:51.979 "data_offset": 2048, 00:17:51.979 "data_size": 63488 00:17:51.979 }, 00:17:51.979 { 00:17:51.979 "name": "pt2", 00:17:51.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.979 "is_configured": true, 00:17:51.979 "data_offset": 2048, 00:17:51.979 "data_size": 63488 00:17:51.979 }, 00:17:51.979 { 00:17:51.979 "name": "pt3", 00:17:51.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.979 "is_configured": true, 00:17:51.979 "data_offset": 2048, 00:17:51.979 "data_size": 63488 00:17:51.979 }, 00:17:51.979 { 00:17:51.979 "name": null, 00:17:51.979 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:51.979 "is_configured": false, 00:17:51.979 "data_offset": 2048, 
00:17:51.979 "data_size": 63488 00:17:51.979 } 00:17:51.979 ] 00:17:51.979 }' 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.979 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 [2024-11-05 11:33:51.429312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:52.239 [2024-11-05 11:33:51.429359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.239 [2024-11-05 11:33:51.429379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:52.239 [2024-11-05 11:33:51.429388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.239 [2024-11-05 11:33:51.429756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.239 [2024-11-05 11:33:51.429773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:52.239 [2024-11-05 11:33:51.429840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:52.239 [2024-11-05 11:33:51.429860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:52.239 [2024-11-05 11:33:51.429980] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.239 [2024-11-05 11:33:51.429989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:52.239 [2024-11-05 11:33:51.430223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:52.239 [2024-11-05 11:33:51.437362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.239 [2024-11-05 11:33:51.437388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:52.239 [2024-11-05 11:33:51.437667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.239 pt4 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.239 
11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.239 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.239 "name": "raid_bdev1", 00:17:52.239 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:52.239 "strip_size_kb": 64, 00:17:52.239 "state": "online", 00:17:52.239 "raid_level": "raid5f", 00:17:52.239 "superblock": true, 00:17:52.239 "num_base_bdevs": 4, 00:17:52.239 "num_base_bdevs_discovered": 3, 00:17:52.239 "num_base_bdevs_operational": 3, 00:17:52.239 "base_bdevs_list": [ 00:17:52.239 { 00:17:52.239 "name": null, 00:17:52.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.239 "is_configured": false, 00:17:52.239 "data_offset": 2048, 00:17:52.239 "data_size": 63488 00:17:52.239 }, 00:17:52.239 { 00:17:52.239 "name": "pt2", 00:17:52.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.239 "is_configured": true, 00:17:52.239 "data_offset": 2048, 00:17:52.239 "data_size": 63488 00:17:52.239 }, 00:17:52.239 { 00:17:52.239 "name": "pt3", 00:17:52.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.239 "is_configured": true, 00:17:52.239 "data_offset": 2048, 00:17:52.239 "data_size": 63488 00:17:52.239 }, 00:17:52.239 { 00:17:52.239 "name": "pt4", 00:17:52.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.239 "is_configured": true, 00:17:52.240 "data_offset": 2048, 00:17:52.240 "data_size": 63488 00:17:52.240 } 00:17:52.240 ] 00:17:52.240 }' 00:17:52.240 11:33:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.240 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 [2024-11-05 11:33:51.825494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.809 [2024-11-05 11:33:51.825520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.809 [2024-11-05 11:33:51.825590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.809 [2024-11-05 11:33:51.825658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.809 [2024-11-05 11:33:51.825669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 [2024-11-05 11:33:51.897369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.809 [2024-11-05 11:33:51.897472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.809 [2024-11-05 11:33:51.897516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:52.809 [2024-11-05 11:33:51.897549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.809 [2024-11-05 11:33:51.899755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.809 [2024-11-05 11:33:51.899846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.809 [2024-11-05 11:33:51.899930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.809 [2024-11-05 11:33:51.899989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.809 
[2024-11-05 11:33:51.900126] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.809 [2024-11-05 11:33:51.900160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.809 [2024-11-05 11:33:51.900174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:52.809 [2024-11-05 11:33:51.900239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.809 [2024-11-05 11:33:51.900342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:52.809 pt1 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.809 "name": "raid_bdev1", 00:17:52.809 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:52.809 "strip_size_kb": 64, 00:17:52.809 "state": "configuring", 00:17:52.809 "raid_level": "raid5f", 00:17:52.809 "superblock": true, 00:17:52.809 "num_base_bdevs": 4, 00:17:52.809 "num_base_bdevs_discovered": 2, 00:17:52.809 "num_base_bdevs_operational": 3, 00:17:52.809 "base_bdevs_list": [ 00:17:52.809 { 00:17:52.809 "name": null, 00:17:52.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.809 "is_configured": false, 00:17:52.809 "data_offset": 2048, 00:17:52.809 "data_size": 63488 00:17:52.809 }, 00:17:52.809 { 00:17:52.809 "name": "pt2", 00:17:52.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.809 "is_configured": true, 00:17:52.809 "data_offset": 2048, 00:17:52.809 "data_size": 63488 00:17:52.809 }, 00:17:52.809 { 00:17:52.809 "name": "pt3", 00:17:52.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.809 "is_configured": true, 00:17:52.809 "data_offset": 2048, 00:17:52.809 "data_size": 63488 00:17:52.809 }, 00:17:52.809 { 00:17:52.809 "name": null, 00:17:52.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.809 "is_configured": false, 00:17:52.809 "data_offset": 2048, 00:17:52.809 "data_size": 63488 00:17:52.809 } 00:17:52.809 ] 
00:17:52.809 }' 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.809 11:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.328 [2024-11-05 11:33:52.344612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:53.328 [2024-11-05 11:33:52.344702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.328 [2024-11-05 11:33:52.344742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:53.328 [2024-11-05 11:33:52.344772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.328 [2024-11-05 11:33:52.345175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.328 [2024-11-05 11:33:52.345240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:53.328 [2024-11-05 11:33:52.345349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:53.328 [2024-11-05 11:33:52.345415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:53.328 [2024-11-05 11:33:52.345587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.328 [2024-11-05 11:33:52.345626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:53.328 [2024-11-05 11:33:52.345887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:53.328 [2024-11-05 11:33:52.353263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.328 [2024-11-05 11:33:52.353322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.328 [2024-11-05 11:33:52.353598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.328 pt4 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.328 11:33:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.328 "name": "raid_bdev1", 00:17:53.328 "uuid": "e058ebd2-6d51-4d44-b011-c9262373ed21", 00:17:53.328 "strip_size_kb": 64, 00:17:53.328 "state": "online", 00:17:53.328 "raid_level": "raid5f", 00:17:53.328 "superblock": true, 00:17:53.328 "num_base_bdevs": 4, 00:17:53.328 "num_base_bdevs_discovered": 3, 00:17:53.328 "num_base_bdevs_operational": 3, 00:17:53.328 "base_bdevs_list": [ 00:17:53.328 { 00:17:53.328 "name": null, 00:17:53.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.328 "is_configured": false, 00:17:53.328 "data_offset": 2048, 00:17:53.328 "data_size": 63488 00:17:53.328 }, 00:17:53.328 { 00:17:53.328 "name": "pt2", 00:17:53.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.328 "is_configured": true, 00:17:53.328 "data_offset": 2048, 00:17:53.328 "data_size": 63488 00:17:53.328 }, 00:17:53.328 { 00:17:53.328 "name": "pt3", 00:17:53.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.328 "is_configured": true, 00:17:53.328 "data_offset": 2048, 00:17:53.328 "data_size": 63488 
00:17:53.328 }, 00:17:53.328 { 00:17:53.328 "name": "pt4", 00:17:53.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:53.328 "is_configured": true, 00:17:53.328 "data_offset": 2048, 00:17:53.328 "data_size": 63488 00:17:53.328 } 00:17:53.328 ] 00:17:53.328 }' 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.328 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:53.588 [2024-11-05 11:33:52.825766] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.588 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.847 11:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e058ebd2-6d51-4d44-b011-c9262373ed21 '!=' e058ebd2-6d51-4d44-b011-c9262373ed21 ']' 00:17:53.847 11:33:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84137 00:17:53.847 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84137 ']' 00:17:53.847 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84137 00:17:53.847 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84137 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84137' 00:17:53.848 killing process with pid 84137 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84137 00:17:53.848 [2024-11-05 11:33:52.912311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.848 [2024-11-05 11:33:52.912444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.848 11:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84137 00:17:53.848 [2024-11-05 11:33:52.912544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.848 [2024-11-05 11:33:52.912558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:54.107 [2024-11-05 11:33:53.281790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.490 11:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:55.490 
00:17:55.490 real 0m8.396s 00:17:55.490 user 0m13.134s 00:17:55.490 sys 0m1.653s 00:17:55.490 11:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.490 11:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.490 ************************************ 00:17:55.490 END TEST raid5f_superblock_test 00:17:55.490 ************************************ 00:17:55.490 11:33:54 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:55.490 11:33:54 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:55.490 11:33:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:55.490 11:33:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:55.490 11:33:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.490 ************************************ 00:17:55.490 START TEST raid5f_rebuild_test 00:17:55.490 ************************************ 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:55.490 11:33:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84624 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84624 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84624 ']' 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.490 11:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.490 [2024-11-05 11:33:54.510686] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:17:55.490 [2024-11-05 11:33:54.510881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:55.490 Zero copy mechanism will not be used. 
00:17:55.490 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84624 ] 00:17:55.490 [2024-11-05 11:33:54.684436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.750 [2024-11-05 11:33:54.790474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.750 [2024-11-05 11:33:54.983839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.750 [2024-11-05 11:33:54.983969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 BaseBdev1_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 [2024-11-05 11:33:55.388281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.319 [2024-11-05 11:33:55.388343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:56.319 [2024-11-05 11:33:55.388365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.319 [2024-11-05 11:33:55.388375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.319 [2024-11-05 11:33:55.390596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.319 [2024-11-05 11:33:55.390634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.319 BaseBdev1 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 BaseBdev2_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 [2024-11-05 11:33:55.440116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:56.319 [2024-11-05 11:33:55.440177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.319 [2024-11-05 11:33:55.440194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.319 [2024-11-05 11:33:55.440206] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.319 [2024-11-05 11:33:55.442182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.319 [2024-11-05 11:33:55.442267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.319 BaseBdev2 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 BaseBdev3_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 [2024-11-05 11:33:55.529124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:56.319 [2024-11-05 11:33:55.529181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.319 [2024-11-05 11:33:55.529200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:56.319 [2024-11-05 11:33:55.529211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.319 [2024-11-05 11:33:55.531812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.319 [2024-11-05 
11:33:55.531854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:56.319 BaseBdev3 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 BaseBdev4_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.319 [2024-11-05 11:33:55.583438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:56.319 [2024-11-05 11:33:55.583530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.319 [2024-11-05 11:33:55.583555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:56.319 [2024-11-05 11:33:55.583566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.319 [2024-11-05 11:33:55.585635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.319 [2024-11-05 11:33:55.585675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:56.319 BaseBdev4 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.319 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 spare_malloc 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 spare_delay 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 [2024-11-05 11:33:55.649628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.579 [2024-11-05 11:33:55.649680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.579 [2024-11-05 11:33:55.649698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:56.579 [2024-11-05 11:33:55.649709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.579 [2024-11-05 11:33:55.651722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.579 [2024-11-05 11:33:55.651759] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.579 spare 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 [2024-11-05 11:33:55.661663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.579 [2024-11-05 11:33:55.663477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.579 [2024-11-05 11:33:55.663538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.579 [2024-11-05 11:33:55.663587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:56.579 [2024-11-05 11:33:55.663668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.579 [2024-11-05 11:33:55.663678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:56.579 [2024-11-05 11:33:55.663902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:56.579 [2024-11-05 11:33:55.670838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.579 [2024-11-05 11:33:55.670857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.579 [2024-11-05 11:33:55.671025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.579 11:33:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.579 "name": "raid_bdev1", 00:17:56.579 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:17:56.579 "strip_size_kb": 64, 00:17:56.579 "state": "online", 00:17:56.579 "raid_level": "raid5f", 00:17:56.579 "superblock": false, 00:17:56.579 "num_base_bdevs": 4, 00:17:56.579 
"num_base_bdevs_discovered": 4, 00:17:56.579 "num_base_bdevs_operational": 4, 00:17:56.579 "base_bdevs_list": [ 00:17:56.579 { 00:17:56.579 "name": "BaseBdev1", 00:17:56.579 "uuid": "7fa74f70-415f-5111-888d-6d5d81bb82ff", 00:17:56.579 "is_configured": true, 00:17:56.579 "data_offset": 0, 00:17:56.579 "data_size": 65536 00:17:56.579 }, 00:17:56.579 { 00:17:56.579 "name": "BaseBdev2", 00:17:56.579 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:17:56.579 "is_configured": true, 00:17:56.579 "data_offset": 0, 00:17:56.579 "data_size": 65536 00:17:56.579 }, 00:17:56.579 { 00:17:56.579 "name": "BaseBdev3", 00:17:56.579 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:17:56.579 "is_configured": true, 00:17:56.579 "data_offset": 0, 00:17:56.579 "data_size": 65536 00:17:56.579 }, 00:17:56.579 { 00:17:56.579 "name": "BaseBdev4", 00:17:56.579 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:17:56.579 "is_configured": true, 00:17:56.579 "data_offset": 0, 00:17:56.579 "data_size": 65536 00:17:56.579 } 00:17:56.579 ] 00:17:56.579 }' 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.579 11:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.147 [2024-11-05 11:33:56.154267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:57.147 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.148 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:57.407 [2024-11-05 11:33:56.429645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:57.407 /dev/nbd0 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:57.407 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.407 1+0 records in 00:17:57.408 1+0 records out 00:17:57.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369187 s, 11.1 MB/s 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:57.408 11:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:57.976 512+0 records in 00:17:57.976 512+0 records out 00:17:57.976 100663296 bytes (101 MB, 96 MiB) copied, 0.486372 s, 207 MB/s 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:57.976 [2024-11-05 11:33:57.200261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.976 [2024-11-05 11:33:57.230099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.976 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.977 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.236 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.236 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.236 "name": "raid_bdev1", 00:17:58.236 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:17:58.236 "strip_size_kb": 64, 00:17:58.236 "state": "online", 00:17:58.236 "raid_level": "raid5f", 00:17:58.236 "superblock": false, 00:17:58.236 "num_base_bdevs": 4, 00:17:58.236 "num_base_bdevs_discovered": 3, 00:17:58.236 "num_base_bdevs_operational": 3, 00:17:58.236 "base_bdevs_list": [ 00:17:58.236 { 00:17:58.236 "name": null, 00:17:58.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.236 "is_configured": false, 00:17:58.236 "data_offset": 0, 00:17:58.236 "data_size": 65536 00:17:58.236 }, 00:17:58.236 { 00:17:58.236 "name": "BaseBdev2", 00:17:58.236 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:17:58.236 "is_configured": true, 00:17:58.236 "data_offset": 0, 00:17:58.236 "data_size": 65536 00:17:58.236 }, 00:17:58.236 { 00:17:58.236 "name": "BaseBdev3", 00:17:58.236 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:17:58.236 "is_configured": true, 00:17:58.236 "data_offset": 0, 
00:17:58.236 "data_size": 65536 00:17:58.236 }, 00:17:58.236 { 00:17:58.236 "name": "BaseBdev4", 00:17:58.236 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:17:58.236 "is_configured": true, 00:17:58.236 "data_offset": 0, 00:17:58.236 "data_size": 65536 00:17:58.236 } 00:17:58.236 ] 00:17:58.236 }' 00:17:58.236 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.236 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.495 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.495 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.495 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.495 [2024-11-05 11:33:57.665332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.495 [2024-11-05 11:33:57.680843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:58.495 11:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.495 11:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:58.495 [2024-11-05 11:33:57.689476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.433 11:33:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.433 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.692 "name": "raid_bdev1", 00:17:59.692 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:17:59.692 "strip_size_kb": 64, 00:17:59.692 "state": "online", 00:17:59.692 "raid_level": "raid5f", 00:17:59.692 "superblock": false, 00:17:59.692 "num_base_bdevs": 4, 00:17:59.692 "num_base_bdevs_discovered": 4, 00:17:59.692 "num_base_bdevs_operational": 4, 00:17:59.692 "process": { 00:17:59.692 "type": "rebuild", 00:17:59.692 "target": "spare", 00:17:59.692 "progress": { 00:17:59.692 "blocks": 19200, 00:17:59.692 "percent": 9 00:17:59.692 } 00:17:59.692 }, 00:17:59.692 "base_bdevs_list": [ 00:17:59.692 { 00:17:59.692 "name": "spare", 00:17:59.692 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:17:59.692 "is_configured": true, 00:17:59.692 "data_offset": 0, 00:17:59.692 "data_size": 65536 00:17:59.692 }, 00:17:59.692 { 00:17:59.692 "name": "BaseBdev2", 00:17:59.692 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:17:59.692 "is_configured": true, 00:17:59.692 "data_offset": 0, 00:17:59.692 "data_size": 65536 00:17:59.692 }, 00:17:59.692 { 00:17:59.692 "name": "BaseBdev3", 00:17:59.692 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:17:59.692 "is_configured": true, 00:17:59.692 "data_offset": 0, 00:17:59.692 "data_size": 65536 00:17:59.692 }, 00:17:59.692 { 00:17:59.692 "name": "BaseBdev4", 00:17:59.692 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 
00:17:59.692 "is_configured": true, 00:17:59.692 "data_offset": 0, 00:17:59.692 "data_size": 65536 00:17:59.692 } 00:17:59.692 ] 00:17:59.692 }' 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.692 [2024-11-05 11:33:58.848039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.692 [2024-11-05 11:33:58.894858] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:59.692 [2024-11-05 11:33:58.894973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.692 [2024-11-05 11:33:58.894992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.692 [2024-11-05 11:33:58.895002] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.692 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.693 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.952 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.952 "name": "raid_bdev1", 00:17:59.952 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:17:59.952 "strip_size_kb": 64, 00:17:59.952 "state": "online", 00:17:59.952 "raid_level": "raid5f", 00:17:59.952 "superblock": false, 00:17:59.952 "num_base_bdevs": 4, 00:17:59.952 "num_base_bdevs_discovered": 3, 00:17:59.952 "num_base_bdevs_operational": 3, 00:17:59.952 "base_bdevs_list": [ 00:17:59.952 { 00:17:59.952 "name": null, 00:17:59.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.952 "is_configured": false, 00:17:59.952 "data_offset": 0, 00:17:59.952 "data_size": 65536 
00:17:59.952 }, 00:17:59.952 { 00:17:59.952 "name": "BaseBdev2", 00:17:59.952 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:17:59.952 "is_configured": true, 00:17:59.952 "data_offset": 0, 00:17:59.952 "data_size": 65536 00:17:59.952 }, 00:17:59.952 { 00:17:59.952 "name": "BaseBdev3", 00:17:59.952 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:17:59.952 "is_configured": true, 00:17:59.952 "data_offset": 0, 00:17:59.952 "data_size": 65536 00:17:59.952 }, 00:17:59.952 { 00:17:59.952 "name": "BaseBdev4", 00:17:59.952 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:17:59.952 "is_configured": true, 00:17:59.952 "data_offset": 0, 00:17:59.952 "data_size": 65536 00:17:59.952 } 00:17:59.952 ] 00:17:59.952 }' 00:17:59.952 11:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.952 11:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:00.211 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.211 "name": "raid_bdev1", 00:18:00.211 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:00.211 "strip_size_kb": 64, 00:18:00.211 "state": "online", 00:18:00.211 "raid_level": "raid5f", 00:18:00.211 "superblock": false, 00:18:00.211 "num_base_bdevs": 4, 00:18:00.211 "num_base_bdevs_discovered": 3, 00:18:00.211 "num_base_bdevs_operational": 3, 00:18:00.211 "base_bdevs_list": [ 00:18:00.211 { 00:18:00.211 "name": null, 00:18:00.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.211 "is_configured": false, 00:18:00.211 "data_offset": 0, 00:18:00.211 "data_size": 65536 00:18:00.211 }, 00:18:00.211 { 00:18:00.211 "name": "BaseBdev2", 00:18:00.211 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:00.211 "is_configured": true, 00:18:00.211 "data_offset": 0, 00:18:00.211 "data_size": 65536 00:18:00.211 }, 00:18:00.211 { 00:18:00.211 "name": "BaseBdev3", 00:18:00.211 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:00.211 "is_configured": true, 00:18:00.211 "data_offset": 0, 00:18:00.212 "data_size": 65536 00:18:00.212 }, 00:18:00.212 { 00:18:00.212 "name": "BaseBdev4", 00:18:00.212 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:00.212 "is_configured": true, 00:18:00.212 "data_offset": 0, 00:18:00.212 "data_size": 65536 00:18:00.212 } 00:18:00.212 ] 00:18:00.212 }' 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.212 [2024-11-05 11:33:59.455011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.212 [2024-11-05 11:33:59.468643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.212 11:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:00.212 [2024-11-05 11:33:59.477145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.592 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.592 
"name": "raid_bdev1", 00:18:01.592 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:01.592 "strip_size_kb": 64, 00:18:01.592 "state": "online", 00:18:01.592 "raid_level": "raid5f", 00:18:01.592 "superblock": false, 00:18:01.592 "num_base_bdevs": 4, 00:18:01.592 "num_base_bdevs_discovered": 4, 00:18:01.592 "num_base_bdevs_operational": 4, 00:18:01.592 "process": { 00:18:01.592 "type": "rebuild", 00:18:01.592 "target": "spare", 00:18:01.592 "progress": { 00:18:01.592 "blocks": 19200, 00:18:01.592 "percent": 9 00:18:01.592 } 00:18:01.592 }, 00:18:01.592 "base_bdevs_list": [ 00:18:01.592 { 00:18:01.592 "name": "spare", 00:18:01.592 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:01.592 "is_configured": true, 00:18:01.592 "data_offset": 0, 00:18:01.592 "data_size": 65536 00:18:01.592 }, 00:18:01.592 { 00:18:01.592 "name": "BaseBdev2", 00:18:01.592 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:01.592 "is_configured": true, 00:18:01.592 "data_offset": 0, 00:18:01.592 "data_size": 65536 00:18:01.592 }, 00:18:01.592 { 00:18:01.592 "name": "BaseBdev3", 00:18:01.592 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:01.592 "is_configured": true, 00:18:01.592 "data_offset": 0, 00:18:01.592 "data_size": 65536 00:18:01.592 }, 00:18:01.592 { 00:18:01.592 "name": "BaseBdev4", 00:18:01.592 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:01.592 "is_configured": true, 00:18:01.592 "data_offset": 0, 00:18:01.592 "data_size": 65536 00:18:01.592 } 00:18:01.592 ] 00:18:01.592 }' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.593 11:34:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.593 "name": "raid_bdev1", 00:18:01.593 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:01.593 "strip_size_kb": 64, 00:18:01.593 "state": "online", 00:18:01.593 "raid_level": "raid5f", 00:18:01.593 "superblock": false, 00:18:01.593 "num_base_bdevs": 4, 00:18:01.593 
"num_base_bdevs_discovered": 4, 00:18:01.593 "num_base_bdevs_operational": 4, 00:18:01.593 "process": { 00:18:01.593 "type": "rebuild", 00:18:01.593 "target": "spare", 00:18:01.593 "progress": { 00:18:01.593 "blocks": 21120, 00:18:01.593 "percent": 10 00:18:01.593 } 00:18:01.593 }, 00:18:01.593 "base_bdevs_list": [ 00:18:01.593 { 00:18:01.593 "name": "spare", 00:18:01.593 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:01.593 "is_configured": true, 00:18:01.593 "data_offset": 0, 00:18:01.593 "data_size": 65536 00:18:01.593 }, 00:18:01.593 { 00:18:01.593 "name": "BaseBdev2", 00:18:01.593 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:01.593 "is_configured": true, 00:18:01.593 "data_offset": 0, 00:18:01.593 "data_size": 65536 00:18:01.593 }, 00:18:01.593 { 00:18:01.593 "name": "BaseBdev3", 00:18:01.593 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:01.593 "is_configured": true, 00:18:01.593 "data_offset": 0, 00:18:01.593 "data_size": 65536 00:18:01.593 }, 00:18:01.593 { 00:18:01.593 "name": "BaseBdev4", 00:18:01.593 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:01.593 "is_configured": true, 00:18:01.593 "data_offset": 0, 00:18:01.593 "data_size": 65536 00:18:01.593 } 00:18:01.593 ] 00:18:01.593 }' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.593 11:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.530 11:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.788 11:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.788 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.788 "name": "raid_bdev1", 00:18:02.788 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:02.788 "strip_size_kb": 64, 00:18:02.788 "state": "online", 00:18:02.788 "raid_level": "raid5f", 00:18:02.788 "superblock": false, 00:18:02.788 "num_base_bdevs": 4, 00:18:02.788 "num_base_bdevs_discovered": 4, 00:18:02.788 "num_base_bdevs_operational": 4, 00:18:02.788 "process": { 00:18:02.788 "type": "rebuild", 00:18:02.788 "target": "spare", 00:18:02.788 "progress": { 00:18:02.788 "blocks": 44160, 00:18:02.788 "percent": 22 00:18:02.788 } 00:18:02.788 }, 00:18:02.788 "base_bdevs_list": [ 00:18:02.788 { 00:18:02.788 "name": "spare", 00:18:02.788 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:02.788 "is_configured": true, 00:18:02.788 "data_offset": 0, 00:18:02.788 "data_size": 65536 00:18:02.788 }, 00:18:02.788 { 00:18:02.788 "name": "BaseBdev2", 00:18:02.788 "uuid": 
"1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:02.788 "is_configured": true, 00:18:02.788 "data_offset": 0, 00:18:02.788 "data_size": 65536 00:18:02.788 }, 00:18:02.788 { 00:18:02.788 "name": "BaseBdev3", 00:18:02.788 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:02.788 "is_configured": true, 00:18:02.788 "data_offset": 0, 00:18:02.788 "data_size": 65536 00:18:02.788 }, 00:18:02.788 { 00:18:02.788 "name": "BaseBdev4", 00:18:02.788 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:02.788 "is_configured": true, 00:18:02.788 "data_offset": 0, 00:18:02.789 "data_size": 65536 00:18:02.789 } 00:18:02.789 ] 00:18:02.789 }' 00:18:02.789 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.789 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.789 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.789 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.789 11:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.725 11:34:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.725 "name": "raid_bdev1", 00:18:03.725 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:03.725 "strip_size_kb": 64, 00:18:03.725 "state": "online", 00:18:03.725 "raid_level": "raid5f", 00:18:03.725 "superblock": false, 00:18:03.725 "num_base_bdevs": 4, 00:18:03.725 "num_base_bdevs_discovered": 4, 00:18:03.725 "num_base_bdevs_operational": 4, 00:18:03.725 "process": { 00:18:03.725 "type": "rebuild", 00:18:03.725 "target": "spare", 00:18:03.725 "progress": { 00:18:03.725 "blocks": 65280, 00:18:03.725 "percent": 33 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 "base_bdevs_list": [ 00:18:03.725 { 00:18:03.725 "name": "spare", 00:18:03.725 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:03.725 "is_configured": true, 00:18:03.725 "data_offset": 0, 00:18:03.725 "data_size": 65536 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "name": "BaseBdev2", 00:18:03.725 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:03.725 "is_configured": true, 00:18:03.725 "data_offset": 0, 00:18:03.725 "data_size": 65536 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "name": "BaseBdev3", 00:18:03.725 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:03.725 "is_configured": true, 00:18:03.725 "data_offset": 0, 00:18:03.725 "data_size": 65536 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "name": "BaseBdev4", 00:18:03.725 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:03.725 "is_configured": true, 00:18:03.725 "data_offset": 0, 00:18:03.725 "data_size": 65536 00:18:03.725 } 
00:18:03.725 ] 00:18:03.725 }' 00:18:03.725 11:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.986 11:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.986 11:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.986 11:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.986 11:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.952 "name": "raid_bdev1", 00:18:04.952 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:04.952 
"strip_size_kb": 64, 00:18:04.952 "state": "online", 00:18:04.952 "raid_level": "raid5f", 00:18:04.952 "superblock": false, 00:18:04.952 "num_base_bdevs": 4, 00:18:04.952 "num_base_bdevs_discovered": 4, 00:18:04.952 "num_base_bdevs_operational": 4, 00:18:04.952 "process": { 00:18:04.952 "type": "rebuild", 00:18:04.952 "target": "spare", 00:18:04.952 "progress": { 00:18:04.952 "blocks": 88320, 00:18:04.952 "percent": 44 00:18:04.952 } 00:18:04.952 }, 00:18:04.952 "base_bdevs_list": [ 00:18:04.952 { 00:18:04.952 "name": "spare", 00:18:04.952 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:04.952 "is_configured": true, 00:18:04.952 "data_offset": 0, 00:18:04.952 "data_size": 65536 00:18:04.952 }, 00:18:04.952 { 00:18:04.952 "name": "BaseBdev2", 00:18:04.952 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:04.952 "is_configured": true, 00:18:04.952 "data_offset": 0, 00:18:04.952 "data_size": 65536 00:18:04.952 }, 00:18:04.952 { 00:18:04.952 "name": "BaseBdev3", 00:18:04.952 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:04.952 "is_configured": true, 00:18:04.952 "data_offset": 0, 00:18:04.952 "data_size": 65536 00:18:04.952 }, 00:18:04.952 { 00:18:04.952 "name": "BaseBdev4", 00:18:04.952 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:04.952 "is_configured": true, 00:18:04.952 "data_offset": 0, 00:18:04.952 "data_size": 65536 00:18:04.952 } 00:18:04.952 ] 00:18:04.952 }' 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.952 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.211 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.211 11:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.149 11:34:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.149 "name": "raid_bdev1", 00:18:06.149 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:06.149 "strip_size_kb": 64, 00:18:06.149 "state": "online", 00:18:06.149 "raid_level": "raid5f", 00:18:06.149 "superblock": false, 00:18:06.149 "num_base_bdevs": 4, 00:18:06.149 "num_base_bdevs_discovered": 4, 00:18:06.149 "num_base_bdevs_operational": 4, 00:18:06.149 "process": { 00:18:06.149 "type": "rebuild", 00:18:06.149 "target": "spare", 00:18:06.149 "progress": { 00:18:06.149 "blocks": 109440, 00:18:06.149 "percent": 55 00:18:06.149 } 00:18:06.149 }, 00:18:06.149 "base_bdevs_list": [ 00:18:06.149 { 00:18:06.149 "name": "spare", 00:18:06.149 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 
00:18:06.149 "is_configured": true, 00:18:06.149 "data_offset": 0, 00:18:06.149 "data_size": 65536 00:18:06.149 }, 00:18:06.149 { 00:18:06.149 "name": "BaseBdev2", 00:18:06.149 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:06.149 "is_configured": true, 00:18:06.149 "data_offset": 0, 00:18:06.149 "data_size": 65536 00:18:06.149 }, 00:18:06.149 { 00:18:06.149 "name": "BaseBdev3", 00:18:06.149 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:06.149 "is_configured": true, 00:18:06.149 "data_offset": 0, 00:18:06.149 "data_size": 65536 00:18:06.149 }, 00:18:06.149 { 00:18:06.149 "name": "BaseBdev4", 00:18:06.149 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:06.149 "is_configured": true, 00:18:06.149 "data_offset": 0, 00:18:06.149 "data_size": 65536 00:18:06.149 } 00:18:06.149 ] 00:18:06.149 }' 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.149 11:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.527 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.527 "name": "raid_bdev1", 00:18:07.527 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:07.527 "strip_size_kb": 64, 00:18:07.527 "state": "online", 00:18:07.527 "raid_level": "raid5f", 00:18:07.527 "superblock": false, 00:18:07.527 "num_base_bdevs": 4, 00:18:07.527 "num_base_bdevs_discovered": 4, 00:18:07.527 "num_base_bdevs_operational": 4, 00:18:07.527 "process": { 00:18:07.527 "type": "rebuild", 00:18:07.527 "target": "spare", 00:18:07.527 "progress": { 00:18:07.527 "blocks": 130560, 00:18:07.527 "percent": 66 00:18:07.527 } 00:18:07.527 }, 00:18:07.527 "base_bdevs_list": [ 00:18:07.527 { 00:18:07.527 "name": "spare", 00:18:07.527 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:07.527 "is_configured": true, 00:18:07.527 "data_offset": 0, 00:18:07.527 "data_size": 65536 00:18:07.527 }, 00:18:07.527 { 00:18:07.527 "name": "BaseBdev2", 00:18:07.527 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:07.527 "is_configured": true, 00:18:07.527 "data_offset": 0, 00:18:07.527 "data_size": 65536 00:18:07.527 }, 00:18:07.527 { 00:18:07.527 "name": "BaseBdev3", 00:18:07.527 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:07.527 "is_configured": true, 00:18:07.527 "data_offset": 0, 00:18:07.527 "data_size": 65536 00:18:07.527 }, 00:18:07.527 { 00:18:07.527 "name": 
"BaseBdev4", 00:18:07.527 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:07.527 "is_configured": true, 00:18:07.527 "data_offset": 0, 00:18:07.527 "data_size": 65536 00:18:07.527 } 00:18:07.527 ] 00:18:07.527 }' 00:18:07.528 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.528 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.528 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.528 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.528 11:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.463 11:34:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.463 "name": "raid_bdev1", 00:18:08.463 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:08.463 "strip_size_kb": 64, 00:18:08.463 "state": "online", 00:18:08.463 "raid_level": "raid5f", 00:18:08.463 "superblock": false, 00:18:08.463 "num_base_bdevs": 4, 00:18:08.463 "num_base_bdevs_discovered": 4, 00:18:08.463 "num_base_bdevs_operational": 4, 00:18:08.463 "process": { 00:18:08.463 "type": "rebuild", 00:18:08.463 "target": "spare", 00:18:08.463 "progress": { 00:18:08.463 "blocks": 153600, 00:18:08.463 "percent": 78 00:18:08.463 } 00:18:08.463 }, 00:18:08.463 "base_bdevs_list": [ 00:18:08.463 { 00:18:08.463 "name": "spare", 00:18:08.463 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:08.463 "is_configured": true, 00:18:08.463 "data_offset": 0, 00:18:08.463 "data_size": 65536 00:18:08.463 }, 00:18:08.463 { 00:18:08.463 "name": "BaseBdev2", 00:18:08.463 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:08.463 "is_configured": true, 00:18:08.463 "data_offset": 0, 00:18:08.463 "data_size": 65536 00:18:08.463 }, 00:18:08.463 { 00:18:08.463 "name": "BaseBdev3", 00:18:08.463 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:08.463 "is_configured": true, 00:18:08.463 "data_offset": 0, 00:18:08.463 "data_size": 65536 00:18:08.463 }, 00:18:08.463 { 00:18:08.463 "name": "BaseBdev4", 00:18:08.463 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:08.463 "is_configured": true, 00:18:08.463 "data_offset": 0, 00:18:08.463 "data_size": 65536 00:18:08.463 } 00:18:08.463 ] 00:18:08.463 }' 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.463 11:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.406 11:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.665 "name": "raid_bdev1", 00:18:09.665 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:09.665 "strip_size_kb": 64, 00:18:09.665 "state": "online", 00:18:09.665 "raid_level": "raid5f", 00:18:09.665 "superblock": false, 00:18:09.665 "num_base_bdevs": 4, 00:18:09.665 "num_base_bdevs_discovered": 4, 00:18:09.665 "num_base_bdevs_operational": 4, 00:18:09.665 "process": { 00:18:09.665 "type": "rebuild", 00:18:09.665 "target": "spare", 00:18:09.665 "progress": { 00:18:09.665 "blocks": 174720, 00:18:09.665 "percent": 88 
00:18:09.665 } 00:18:09.665 }, 00:18:09.665 "base_bdevs_list": [ 00:18:09.665 { 00:18:09.665 "name": "spare", 00:18:09.665 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:09.665 "is_configured": true, 00:18:09.665 "data_offset": 0, 00:18:09.665 "data_size": 65536 00:18:09.665 }, 00:18:09.665 { 00:18:09.665 "name": "BaseBdev2", 00:18:09.665 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:09.665 "is_configured": true, 00:18:09.665 "data_offset": 0, 00:18:09.665 "data_size": 65536 00:18:09.665 }, 00:18:09.665 { 00:18:09.665 "name": "BaseBdev3", 00:18:09.665 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:09.665 "is_configured": true, 00:18:09.665 "data_offset": 0, 00:18:09.665 "data_size": 65536 00:18:09.665 }, 00:18:09.665 { 00:18:09.665 "name": "BaseBdev4", 00:18:09.665 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:09.665 "is_configured": true, 00:18:09.665 "data_offset": 0, 00:18:09.665 "data_size": 65536 00:18:09.665 } 00:18:09.665 ] 00:18:09.665 }' 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.665 11:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.602 [2024-11-05 11:34:09.818261] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:10.602 [2024-11-05 11:34:09.818327] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:10.602 [2024-11-05 11:34:09.818362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.602 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.602 "name": "raid_bdev1", 00:18:10.602 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:10.602 "strip_size_kb": 64, 00:18:10.602 "state": "online", 00:18:10.602 "raid_level": "raid5f", 00:18:10.602 "superblock": false, 00:18:10.602 "num_base_bdevs": 4, 00:18:10.602 "num_base_bdevs_discovered": 4, 00:18:10.602 "num_base_bdevs_operational": 4, 00:18:10.602 "base_bdevs_list": [ 00:18:10.602 { 00:18:10.602 "name": "spare", 00:18:10.602 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:10.602 "is_configured": true, 00:18:10.602 "data_offset": 0, 00:18:10.602 "data_size": 65536 00:18:10.602 }, 00:18:10.602 { 00:18:10.602 "name": "BaseBdev2", 00:18:10.602 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:10.602 "is_configured": true, 00:18:10.602 
"data_offset": 0, 00:18:10.602 "data_size": 65536 00:18:10.602 }, 00:18:10.602 { 00:18:10.602 "name": "BaseBdev3", 00:18:10.602 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:10.602 "is_configured": true, 00:18:10.602 "data_offset": 0, 00:18:10.602 "data_size": 65536 00:18:10.603 }, 00:18:10.603 { 00:18:10.603 "name": "BaseBdev4", 00:18:10.603 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:10.603 "is_configured": true, 00:18:10.603 "data_offset": 0, 00:18:10.603 "data_size": 65536 00:18:10.603 } 00:18:10.603 ] 00:18:10.603 }' 00:18:10.603 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.862 11:34:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.862 11:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.862 "name": "raid_bdev1", 00:18:10.862 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:10.862 "strip_size_kb": 64, 00:18:10.862 "state": "online", 00:18:10.862 "raid_level": "raid5f", 00:18:10.862 "superblock": false, 00:18:10.862 "num_base_bdevs": 4, 00:18:10.862 "num_base_bdevs_discovered": 4, 00:18:10.862 "num_base_bdevs_operational": 4, 00:18:10.862 "base_bdevs_list": [ 00:18:10.862 { 00:18:10.862 "name": "spare", 00:18:10.862 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:10.862 "is_configured": true, 00:18:10.862 "data_offset": 0, 00:18:10.862 "data_size": 65536 00:18:10.862 }, 00:18:10.862 { 00:18:10.862 "name": "BaseBdev2", 00:18:10.862 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:10.862 "is_configured": true, 00:18:10.862 "data_offset": 0, 00:18:10.862 "data_size": 65536 00:18:10.862 }, 00:18:10.862 { 00:18:10.862 "name": "BaseBdev3", 00:18:10.862 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:10.862 "is_configured": true, 00:18:10.862 "data_offset": 0, 00:18:10.862 "data_size": 65536 00:18:10.862 }, 00:18:10.862 { 00:18:10.862 "name": "BaseBdev4", 00:18:10.862 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:10.862 "is_configured": true, 00:18:10.862 "data_offset": 0, 00:18:10.862 "data_size": 65536 00:18:10.862 } 00:18:10.862 ] 00:18:10.862 }' 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.862 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.121 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.121 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.121 "name": "raid_bdev1", 00:18:11.121 "uuid": "9fc90da6-48d4-4b19-a73b-c972b58a5cd3", 00:18:11.121 "strip_size_kb": 64, 00:18:11.121 "state": "online", 00:18:11.121 "raid_level": "raid5f", 
00:18:11.121 "superblock": false, 00:18:11.121 "num_base_bdevs": 4, 00:18:11.121 "num_base_bdevs_discovered": 4, 00:18:11.121 "num_base_bdevs_operational": 4, 00:18:11.121 "base_bdevs_list": [ 00:18:11.121 { 00:18:11.121 "name": "spare", 00:18:11.121 "uuid": "d0100df5-6b2d-50cd-ba3d-abe0926dbeeb", 00:18:11.121 "is_configured": true, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 }, 00:18:11.121 { 00:18:11.121 "name": "BaseBdev2", 00:18:11.121 "uuid": "1709e23f-8583-51f6-bf9c-5f1d2aaae5a8", 00:18:11.121 "is_configured": true, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 }, 00:18:11.121 { 00:18:11.121 "name": "BaseBdev3", 00:18:11.121 "uuid": "43b5144a-b5c8-5dba-90e2-78e4ed7c6e82", 00:18:11.121 "is_configured": true, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 }, 00:18:11.121 { 00:18:11.121 "name": "BaseBdev4", 00:18:11.121 "uuid": "1853609e-fcdb-5a3b-b144-d79203d0fec0", 00:18:11.121 "is_configured": true, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 } 00:18:11.121 ] 00:18:11.121 }' 00:18:11.121 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.121 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.380 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:11.380 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.380 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.380 [2024-11-05 11:34:10.598831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.380 [2024-11-05 11:34:10.598871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.380 [2024-11-05 11:34:10.598941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:11.380 [2024-11-05 11:34:10.599028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.380 [2024-11-05 11:34:10.599059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@12 -- # local i 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.381 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:11.640 /dev/nbd0 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.640 1+0 records in 00:18:11.640 1+0 records out 00:18:11.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318447 s, 12.9 MB/s 00:18:11.640 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.908 11:34:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # size=4096 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:11.909 11:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:11.909 /dev/nbd1 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:11.909 1+0 records in 00:18:11.909 1+0 records out 00:18:11.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391831 s, 10.5 MB/s 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:11.909 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.172 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.431 
11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.431 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84624 00:18:12.691 11:34:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84624 ']' 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84624 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84624 00:18:12.691 killing process with pid 84624 00:18:12.691 Received shutdown signal, test time was about 60.000000 seconds 00:18:12.691 00:18:12.691 Latency(us) 00:18:12.691 [2024-11-05T11:34:11.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.691 [2024-11-05T11:34:11.965Z] =================================================================================================================== 00:18:12.691 [2024-11-05T11:34:11.965Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84624' 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84624 00:18:12.691 [2024-11-05 11:34:11.833354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.691 11:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84624 00:18:13.259 [2024-11-05 11:34:12.294990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:14.198 00:18:14.198 real 0m18.906s 00:18:14.198 user 0m22.766s 00:18:14.198 sys 0m2.266s 
00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.198 ************************************ 00:18:14.198 END TEST raid5f_rebuild_test 00:18:14.198 ************************************ 00:18:14.198 11:34:13 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:14.198 11:34:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:14.198 11:34:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:14.198 11:34:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.198 ************************************ 00:18:14.198 START TEST raid5f_rebuild_test_sb 00:18:14.198 ************************************ 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.198 11:34:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:14.198 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true 
']' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85127 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85127 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85127 ']' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:14.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:14.199 11:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.458 [2024-11-05 11:34:13.509196] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:18:14.458 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:14.458 Zero copy mechanism will not be used. 
00:18:14.459 [2024-11-05 11:34:13.509819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85127 ] 00:18:14.459 [2024-11-05 11:34:13.689373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.718 [2024-11-05 11:34:13.795813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.718 [2024-11-05 11:34:13.985062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.718 [2024-11-05 11:34:13.985098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.288 BaseBdev1_malloc 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.288 [2024-11-05 11:34:14.357277] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:15.288 [2024-11-05 11:34:14.357353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.288 [2024-11-05 11:34:14.357376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:15.288 [2024-11-05 11:34:14.357387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.288 [2024-11-05 11:34:14.359414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.288 [2024-11-05 11:34:14.359452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:15.288 BaseBdev1 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.288 BaseBdev2_malloc 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:15.288 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 [2024-11-05 11:34:14.409896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:15.289 [2024-11-05 11:34:14.409964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:15.289 [2024-11-05 11:34:14.409982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:15.289 [2024-11-05 11:34:14.409993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.289 [2024-11-05 11:34:14.411953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.289 [2024-11-05 11:34:14.411991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:15.289 BaseBdev2 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 BaseBdev3_malloc 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 [2024-11-05 11:34:14.496684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:15.289 [2024-11-05 11:34:14.496749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.289 [2024-11-05 11:34:14.496768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:15.289 [2024-11-05 
11:34:14.496779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.289 [2024-11-05 11:34:14.498819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.289 [2024-11-05 11:34:14.498858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:15.289 BaseBdev3 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 BaseBdev4_malloc 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.289 [2024-11-05 11:34:14.545740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:15.289 [2024-11-05 11:34:14.545803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.289 [2024-11-05 11:34:14.545819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:15.289 [2024-11-05 11:34:14.545829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.289 [2024-11-05 11:34:14.547859] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:15.289 [2024-11-05 11:34:14.547900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:15.289 BaseBdev4 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.289 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 spare_malloc 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 spare_delay 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 [2024-11-05 11:34:14.611102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.549 [2024-11-05 11:34:14.611185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.549 [2024-11-05 11:34:14.611204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:15.549 [2024-11-05 11:34:14.611215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.549 [2024-11-05 11:34:14.613208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.549 [2024-11-05 11:34:14.613242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.549 spare 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 [2024-11-05 11:34:14.623144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.549 [2024-11-05 11:34:14.624894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.549 [2024-11-05 11:34:14.624971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.549 [2024-11-05 11:34:14.625019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:15.549 [2024-11-05 11:34:14.625199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:15.549 [2024-11-05 11:34:14.625221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:15.549 [2024-11-05 11:34:14.625467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:15.549 [2024-11-05 11:34:14.632523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:15.549 [2024-11-05 11:34:14.632544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:18:15.549 [2024-11-05 11:34:14.632740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.549 11:34:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.549 "name": "raid_bdev1", 00:18:15.549 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:15.549 "strip_size_kb": 64, 00:18:15.549 "state": "online", 00:18:15.549 "raid_level": "raid5f", 00:18:15.549 "superblock": true, 00:18:15.549 "num_base_bdevs": 4, 00:18:15.549 "num_base_bdevs_discovered": 4, 00:18:15.550 "num_base_bdevs_operational": 4, 00:18:15.550 "base_bdevs_list": [ 00:18:15.550 { 00:18:15.550 "name": "BaseBdev1", 00:18:15.550 "uuid": "7f2752a7-cf2b-5eac-93c1-17fb633f8624", 00:18:15.550 "is_configured": true, 00:18:15.550 "data_offset": 2048, 00:18:15.550 "data_size": 63488 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "name": "BaseBdev2", 00:18:15.550 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:15.550 "is_configured": true, 00:18:15.550 "data_offset": 2048, 00:18:15.550 "data_size": 63488 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "name": "BaseBdev3", 00:18:15.550 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:15.550 "is_configured": true, 00:18:15.550 "data_offset": 2048, 00:18:15.550 "data_size": 63488 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "name": "BaseBdev4", 00:18:15.550 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:15.550 "is_configured": true, 00:18:15.550 "data_offset": 2048, 00:18:15.550 "data_size": 63488 00:18:15.550 } 00:18:15.550 ] 00:18:15.550 }' 00:18:15.550 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.550 11:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.809 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.809 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:15.809 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.809 11:34:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.809 [2024-11-05 11:34:15.060160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.809 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:16.069 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.070 11:34:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.070 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:16.070 [2024-11-05 11:34:15.327531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:16.330 /dev/nbd0 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.330 1+0 records in 00:18:16.330 
1+0 records out 00:18:16.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302459 s, 13.5 MB/s 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:16.330 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:16.590 496+0 records in 00:18:16.590 496+0 records out 00:18:16.590 97517568 bytes (98 MB, 93 MiB) copied, 0.438943 s, 222 MB/s 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.590 11:34:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.590 11:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:16.849 [2024-11-05 11:34:16.060288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 [2024-11-05 11:34:16.088936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:16.849 11:34:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.849 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.109 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.109 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.109 "name": "raid_bdev1", 00:18:17.109 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:17.109 "strip_size_kb": 64, 00:18:17.109 "state": "online", 00:18:17.109 "raid_level": "raid5f", 00:18:17.109 "superblock": true, 00:18:17.109 "num_base_bdevs": 4, 00:18:17.109 "num_base_bdevs_discovered": 3, 00:18:17.109 "num_base_bdevs_operational": 3, 00:18:17.109 
"base_bdevs_list": [ 00:18:17.109 { 00:18:17.109 "name": null, 00:18:17.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.109 "is_configured": false, 00:18:17.109 "data_offset": 0, 00:18:17.109 "data_size": 63488 00:18:17.109 }, 00:18:17.109 { 00:18:17.109 "name": "BaseBdev2", 00:18:17.109 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:17.109 "is_configured": true, 00:18:17.109 "data_offset": 2048, 00:18:17.109 "data_size": 63488 00:18:17.109 }, 00:18:17.109 { 00:18:17.109 "name": "BaseBdev3", 00:18:17.109 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:17.109 "is_configured": true, 00:18:17.109 "data_offset": 2048, 00:18:17.109 "data_size": 63488 00:18:17.109 }, 00:18:17.109 { 00:18:17.109 "name": "BaseBdev4", 00:18:17.109 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:17.109 "is_configured": true, 00:18:17.109 "data_offset": 2048, 00:18:17.109 "data_size": 63488 00:18:17.109 } 00:18:17.109 ] 00:18:17.109 }' 00:18:17.109 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.109 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.369 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.369 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.369 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.369 [2024-11-05 11:34:16.528397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.369 [2024-11-05 11:34:16.544042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:17.369 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.369 11:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:17.369 [2024-11-05 11:34:16.552905] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.308 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.568 "name": "raid_bdev1", 00:18:18.568 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:18.568 "strip_size_kb": 64, 00:18:18.568 "state": "online", 00:18:18.568 "raid_level": "raid5f", 00:18:18.568 "superblock": true, 00:18:18.568 "num_base_bdevs": 4, 00:18:18.568 "num_base_bdevs_discovered": 4, 00:18:18.568 "num_base_bdevs_operational": 4, 00:18:18.568 "process": { 00:18:18.568 "type": "rebuild", 00:18:18.568 "target": "spare", 00:18:18.568 "progress": { 00:18:18.568 "blocks": 19200, 00:18:18.568 "percent": 10 00:18:18.568 } 00:18:18.568 }, 00:18:18.568 "base_bdevs_list": [ 00:18:18.568 { 00:18:18.568 "name": "spare", 00:18:18.568 "uuid": 
"c867372b-0182-5be2-92bb-254c22e8c841", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev2", 00:18:18.568 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev3", 00:18:18.568 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev4", 00:18:18.568 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 } 00:18:18.568 ] 00:18:18.568 }' 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.568 [2024-11-05 11:34:17.707505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.568 [2024-11-05 11:34:17.758210] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.568 [2024-11-05 11:34:17.758268] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.568 [2024-11-05 11:34:17.758284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.568 [2024-11-05 11:34:17.758292] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.568 "name": "raid_bdev1", 00:18:18.568 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:18.568 "strip_size_kb": 64, 00:18:18.568 "state": "online", 00:18:18.568 "raid_level": "raid5f", 00:18:18.568 "superblock": true, 00:18:18.568 "num_base_bdevs": 4, 00:18:18.568 "num_base_bdevs_discovered": 3, 00:18:18.568 "num_base_bdevs_operational": 3, 00:18:18.568 "base_bdevs_list": [ 00:18:18.568 { 00:18:18.568 "name": null, 00:18:18.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.568 "is_configured": false, 00:18:18.568 "data_offset": 0, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev2", 00:18:18.568 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev3", 00:18:18.568 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 }, 00:18:18.568 { 00:18:18.568 "name": "BaseBdev4", 00:18:18.568 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:18.568 "is_configured": true, 00:18:18.568 "data_offset": 2048, 00:18:18.568 "data_size": 63488 00:18:18.568 } 00:18:18.568 ] 00:18:18.568 }' 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.568 11:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.138 
11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.138 "name": "raid_bdev1", 00:18:19.138 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:19.138 "strip_size_kb": 64, 00:18:19.138 "state": "online", 00:18:19.138 "raid_level": "raid5f", 00:18:19.138 "superblock": true, 00:18:19.138 "num_base_bdevs": 4, 00:18:19.138 "num_base_bdevs_discovered": 3, 00:18:19.138 "num_base_bdevs_operational": 3, 00:18:19.138 "base_bdevs_list": [ 00:18:19.138 { 00:18:19.138 "name": null, 00:18:19.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.138 "is_configured": false, 00:18:19.138 "data_offset": 0, 00:18:19.138 "data_size": 63488 00:18:19.138 }, 00:18:19.138 { 00:18:19.138 "name": "BaseBdev2", 00:18:19.138 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:19.138 "is_configured": true, 00:18:19.138 "data_offset": 2048, 00:18:19.138 "data_size": 63488 00:18:19.138 }, 00:18:19.138 { 00:18:19.138 "name": "BaseBdev3", 00:18:19.138 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:19.138 "is_configured": true, 00:18:19.138 "data_offset": 2048, 00:18:19.138 
"data_size": 63488 00:18:19.138 }, 00:18:19.138 { 00:18:19.138 "name": "BaseBdev4", 00:18:19.138 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:19.138 "is_configured": true, 00:18:19.138 "data_offset": 2048, 00:18:19.138 "data_size": 63488 00:18:19.138 } 00:18:19.138 ] 00:18:19.138 }' 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.138 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.138 [2024-11-05 11:34:18.401715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.397 [2024-11-05 11:34:18.416345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:19.397 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.397 11:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:19.397 [2024-11-05 11:34:18.425031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.336 "name": "raid_bdev1", 00:18:20.336 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:20.336 "strip_size_kb": 64, 00:18:20.336 "state": "online", 00:18:20.336 "raid_level": "raid5f", 00:18:20.336 "superblock": true, 00:18:20.336 "num_base_bdevs": 4, 00:18:20.336 "num_base_bdevs_discovered": 4, 00:18:20.336 "num_base_bdevs_operational": 4, 00:18:20.336 "process": { 00:18:20.336 "type": "rebuild", 00:18:20.336 "target": "spare", 00:18:20.336 "progress": { 00:18:20.336 "blocks": 19200, 00:18:20.336 "percent": 10 00:18:20.336 } 00:18:20.336 }, 00:18:20.336 "base_bdevs_list": [ 00:18:20.336 { 00:18:20.336 "name": "spare", 00:18:20.336 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:20.336 "is_configured": true, 00:18:20.336 "data_offset": 2048, 00:18:20.336 "data_size": 63488 00:18:20.336 }, 00:18:20.336 { 00:18:20.336 "name": "BaseBdev2", 00:18:20.336 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:20.336 "is_configured": true, 00:18:20.336 "data_offset": 2048, 00:18:20.336 "data_size": 63488 00:18:20.336 }, 00:18:20.336 { 
00:18:20.336 "name": "BaseBdev3", 00:18:20.336 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:20.336 "is_configured": true, 00:18:20.336 "data_offset": 2048, 00:18:20.336 "data_size": 63488 00:18:20.336 }, 00:18:20.336 { 00:18:20.336 "name": "BaseBdev4", 00:18:20.336 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:20.336 "is_configured": true, 00:18:20.336 "data_offset": 2048, 00:18:20.336 "data_size": 63488 00:18:20.336 } 00:18:20.336 ] 00:18:20.336 }' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:20.336 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=629 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.336 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.337 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.337 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.337 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.337 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.595 "name": "raid_bdev1", 00:18:20.595 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:20.595 "strip_size_kb": 64, 00:18:20.595 "state": "online", 00:18:20.595 "raid_level": "raid5f", 00:18:20.595 "superblock": true, 00:18:20.595 "num_base_bdevs": 4, 00:18:20.595 "num_base_bdevs_discovered": 4, 00:18:20.595 "num_base_bdevs_operational": 4, 00:18:20.595 "process": { 00:18:20.595 "type": "rebuild", 00:18:20.595 "target": "spare", 00:18:20.595 "progress": { 00:18:20.595 "blocks": 21120, 00:18:20.595 "percent": 11 00:18:20.595 } 00:18:20.595 }, 00:18:20.595 "base_bdevs_list": [ 00:18:20.595 { 00:18:20.595 "name": "spare", 00:18:20.595 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:20.595 "is_configured": true, 00:18:20.595 "data_offset": 2048, 00:18:20.595 "data_size": 63488 00:18:20.595 }, 00:18:20.595 { 00:18:20.595 "name": "BaseBdev2", 00:18:20.595 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:20.595 "is_configured": true, 00:18:20.595 "data_offset": 2048, 00:18:20.595 "data_size": 63488 00:18:20.595 }, 00:18:20.595 { 
00:18:20.595 "name": "BaseBdev3", 00:18:20.595 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:20.595 "is_configured": true, 00:18:20.595 "data_offset": 2048, 00:18:20.595 "data_size": 63488 00:18:20.595 }, 00:18:20.595 { 00:18:20.595 "name": "BaseBdev4", 00:18:20.595 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:20.595 "is_configured": true, 00:18:20.595 "data_offset": 2048, 00:18:20.595 "data_size": 63488 00:18:20.595 } 00:18:20.595 ] 00:18:20.595 }' 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.595 11:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.533 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.533 "name": "raid_bdev1", 00:18:21.533 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:21.533 "strip_size_kb": 64, 00:18:21.533 "state": "online", 00:18:21.533 "raid_level": "raid5f", 00:18:21.533 "superblock": true, 00:18:21.533 "num_base_bdevs": 4, 00:18:21.533 "num_base_bdevs_discovered": 4, 00:18:21.533 "num_base_bdevs_operational": 4, 00:18:21.533 "process": { 00:18:21.533 "type": "rebuild", 00:18:21.533 "target": "spare", 00:18:21.533 "progress": { 00:18:21.533 "blocks": 42240, 00:18:21.533 "percent": 22 00:18:21.533 } 00:18:21.533 }, 00:18:21.533 "base_bdevs_list": [ 00:18:21.533 { 00:18:21.533 "name": "spare", 00:18:21.533 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:21.533 "is_configured": true, 00:18:21.533 "data_offset": 2048, 00:18:21.533 "data_size": 63488 00:18:21.533 }, 00:18:21.533 { 00:18:21.533 "name": "BaseBdev2", 00:18:21.533 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:21.533 "is_configured": true, 00:18:21.533 "data_offset": 2048, 00:18:21.533 "data_size": 63488 00:18:21.533 }, 00:18:21.533 { 00:18:21.533 "name": "BaseBdev3", 00:18:21.533 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:21.533 "is_configured": true, 00:18:21.533 "data_offset": 2048, 00:18:21.533 "data_size": 63488 00:18:21.533 }, 00:18:21.533 { 00:18:21.533 "name": "BaseBdev4", 00:18:21.533 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:21.533 "is_configured": true, 00:18:21.533 "data_offset": 2048, 00:18:21.533 "data_size": 63488 00:18:21.533 } 00:18:21.533 ] 00:18:21.534 }' 00:18:21.534 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:18:21.793 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.793 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.793 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.793 11:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.731 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.732 "name": "raid_bdev1", 00:18:22.732 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:22.732 "strip_size_kb": 64, 00:18:22.732 "state": "online", 00:18:22.732 
"raid_level": "raid5f", 00:18:22.732 "superblock": true, 00:18:22.732 "num_base_bdevs": 4, 00:18:22.732 "num_base_bdevs_discovered": 4, 00:18:22.732 "num_base_bdevs_operational": 4, 00:18:22.732 "process": { 00:18:22.732 "type": "rebuild", 00:18:22.732 "target": "spare", 00:18:22.732 "progress": { 00:18:22.732 "blocks": 65280, 00:18:22.732 "percent": 34 00:18:22.732 } 00:18:22.732 }, 00:18:22.732 "base_bdevs_list": [ 00:18:22.732 { 00:18:22.732 "name": "spare", 00:18:22.732 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:22.732 "is_configured": true, 00:18:22.732 "data_offset": 2048, 00:18:22.732 "data_size": 63488 00:18:22.732 }, 00:18:22.732 { 00:18:22.732 "name": "BaseBdev2", 00:18:22.732 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:22.732 "is_configured": true, 00:18:22.732 "data_offset": 2048, 00:18:22.732 "data_size": 63488 00:18:22.732 }, 00:18:22.732 { 00:18:22.732 "name": "BaseBdev3", 00:18:22.732 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:22.732 "is_configured": true, 00:18:22.732 "data_offset": 2048, 00:18:22.732 "data_size": 63488 00:18:22.732 }, 00:18:22.732 { 00:18:22.732 "name": "BaseBdev4", 00:18:22.732 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:22.732 "is_configured": true, 00:18:22.732 "data_offset": 2048, 00:18:22.732 "data_size": 63488 00:18:22.732 } 00:18:22.732 ] 00:18:22.732 }' 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.732 11:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.112 "name": "raid_bdev1", 00:18:24.112 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:24.112 "strip_size_kb": 64, 00:18:24.112 "state": "online", 00:18:24.112 "raid_level": "raid5f", 00:18:24.112 "superblock": true, 00:18:24.112 "num_base_bdevs": 4, 00:18:24.112 "num_base_bdevs_discovered": 4, 00:18:24.112 "num_base_bdevs_operational": 4, 00:18:24.112 "process": { 00:18:24.112 "type": "rebuild", 00:18:24.112 "target": "spare", 00:18:24.112 "progress": { 00:18:24.112 "blocks": 86400, 00:18:24.112 "percent": 45 00:18:24.112 } 00:18:24.112 }, 00:18:24.112 "base_bdevs_list": [ 00:18:24.112 { 00:18:24.112 "name": "spare", 00:18:24.112 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:24.112 "is_configured": true, 
00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "name": "BaseBdev2", 00:18:24.112 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:24.112 "is_configured": true, 00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "name": "BaseBdev3", 00:18:24.112 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:24.112 "is_configured": true, 00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 }, 00:18:24.112 { 00:18:24.112 "name": "BaseBdev4", 00:18:24.112 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:24.112 "is_configured": true, 00:18:24.112 "data_offset": 2048, 00:18:24.112 "data_size": 63488 00:18:24.112 } 00:18:24.112 ] 00:18:24.112 }' 00:18:24.112 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.113 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.113 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.113 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.113 11:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.051 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.051 "name": "raid_bdev1", 00:18:25.051 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:25.051 "strip_size_kb": 64, 00:18:25.051 "state": "online", 00:18:25.051 "raid_level": "raid5f", 00:18:25.051 "superblock": true, 00:18:25.051 "num_base_bdevs": 4, 00:18:25.051 "num_base_bdevs_discovered": 4, 00:18:25.051 "num_base_bdevs_operational": 4, 00:18:25.051 "process": { 00:18:25.051 "type": "rebuild", 00:18:25.051 "target": "spare", 00:18:25.051 "progress": { 00:18:25.051 "blocks": 109440, 00:18:25.051 "percent": 57 00:18:25.051 } 00:18:25.051 }, 00:18:25.052 "base_bdevs_list": [ 00:18:25.052 { 00:18:25.052 "name": "spare", 00:18:25.052 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:25.052 "is_configured": true, 00:18:25.052 "data_offset": 2048, 00:18:25.052 "data_size": 63488 00:18:25.052 }, 00:18:25.052 { 00:18:25.052 "name": "BaseBdev2", 00:18:25.052 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:25.052 "is_configured": true, 00:18:25.052 "data_offset": 2048, 00:18:25.052 "data_size": 63488 00:18:25.052 }, 00:18:25.052 { 00:18:25.052 "name": "BaseBdev3", 00:18:25.052 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:25.052 "is_configured": true, 00:18:25.052 "data_offset": 2048, 00:18:25.052 "data_size": 63488 00:18:25.052 }, 00:18:25.052 
{ 00:18:25.052 "name": "BaseBdev4", 00:18:25.052 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:25.052 "is_configured": true, 00:18:25.052 "data_offset": 2048, 00:18:25.052 "data_size": 63488 00:18:25.052 } 00:18:25.052 ] 00:18:25.052 }' 00:18:25.052 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.052 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.052 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.052 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.052 11:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.432 "name": "raid_bdev1", 00:18:26.432 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:26.432 "strip_size_kb": 64, 00:18:26.432 "state": "online", 00:18:26.432 "raid_level": "raid5f", 00:18:26.432 "superblock": true, 00:18:26.432 "num_base_bdevs": 4, 00:18:26.432 "num_base_bdevs_discovered": 4, 00:18:26.432 "num_base_bdevs_operational": 4, 00:18:26.432 "process": { 00:18:26.432 "type": "rebuild", 00:18:26.432 "target": "spare", 00:18:26.432 "progress": { 00:18:26.432 "blocks": 130560, 00:18:26.432 "percent": 68 00:18:26.432 } 00:18:26.432 }, 00:18:26.432 "base_bdevs_list": [ 00:18:26.432 { 00:18:26.432 "name": "spare", 00:18:26.432 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:26.432 "is_configured": true, 00:18:26.432 "data_offset": 2048, 00:18:26.432 "data_size": 63488 00:18:26.432 }, 00:18:26.432 { 00:18:26.432 "name": "BaseBdev2", 00:18:26.432 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:26.432 "is_configured": true, 00:18:26.432 "data_offset": 2048, 00:18:26.432 "data_size": 63488 00:18:26.432 }, 00:18:26.432 { 00:18:26.432 "name": "BaseBdev3", 00:18:26.432 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:26.432 "is_configured": true, 00:18:26.432 "data_offset": 2048, 00:18:26.432 "data_size": 63488 00:18:26.432 }, 00:18:26.432 { 00:18:26.432 "name": "BaseBdev4", 00:18:26.432 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:26.432 "is_configured": true, 00:18:26.432 "data_offset": 2048, 00:18:26.432 "data_size": 63488 00:18:26.432 } 00:18:26.432 ] 00:18:26.432 }' 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.432 11:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.371 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.371 "name": "raid_bdev1", 00:18:27.371 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:27.371 "strip_size_kb": 64, 00:18:27.371 "state": "online", 00:18:27.371 "raid_level": "raid5f", 00:18:27.371 "superblock": true, 00:18:27.371 "num_base_bdevs": 4, 00:18:27.371 "num_base_bdevs_discovered": 4, 00:18:27.371 "num_base_bdevs_operational": 4, 00:18:27.371 "process": { 00:18:27.371 "type": 
"rebuild", 00:18:27.371 "target": "spare", 00:18:27.371 "progress": { 00:18:27.371 "blocks": 151680, 00:18:27.371 "percent": 79 00:18:27.371 } 00:18:27.371 }, 00:18:27.371 "base_bdevs_list": [ 00:18:27.371 { 00:18:27.371 "name": "spare", 00:18:27.371 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:27.371 "is_configured": true, 00:18:27.371 "data_offset": 2048, 00:18:27.371 "data_size": 63488 00:18:27.371 }, 00:18:27.371 { 00:18:27.371 "name": "BaseBdev2", 00:18:27.371 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:27.371 "is_configured": true, 00:18:27.372 "data_offset": 2048, 00:18:27.372 "data_size": 63488 00:18:27.372 }, 00:18:27.372 { 00:18:27.372 "name": "BaseBdev3", 00:18:27.372 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:27.372 "is_configured": true, 00:18:27.372 "data_offset": 2048, 00:18:27.372 "data_size": 63488 00:18:27.372 }, 00:18:27.372 { 00:18:27.372 "name": "BaseBdev4", 00:18:27.372 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:27.372 "is_configured": true, 00:18:27.372 "data_offset": 2048, 00:18:27.372 "data_size": 63488 00:18:27.372 } 00:18:27.372 ] 00:18:27.372 }' 00:18:27.372 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.372 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.372 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.372 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.372 11:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.753 "name": "raid_bdev1", 00:18:28.753 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:28.753 "strip_size_kb": 64, 00:18:28.753 "state": "online", 00:18:28.753 "raid_level": "raid5f", 00:18:28.753 "superblock": true, 00:18:28.753 "num_base_bdevs": 4, 00:18:28.753 "num_base_bdevs_discovered": 4, 00:18:28.753 "num_base_bdevs_operational": 4, 00:18:28.753 "process": { 00:18:28.753 "type": "rebuild", 00:18:28.753 "target": "spare", 00:18:28.753 "progress": { 00:18:28.753 "blocks": 174720, 00:18:28.753 "percent": 91 00:18:28.753 } 00:18:28.753 }, 00:18:28.753 "base_bdevs_list": [ 00:18:28.753 { 00:18:28.753 "name": "spare", 00:18:28.753 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:28.753 "is_configured": true, 00:18:28.753 "data_offset": 2048, 00:18:28.753 "data_size": 63488 00:18:28.753 }, 00:18:28.753 { 00:18:28.753 "name": "BaseBdev2", 00:18:28.753 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:28.753 
"is_configured": true, 00:18:28.753 "data_offset": 2048, 00:18:28.753 "data_size": 63488 00:18:28.753 }, 00:18:28.753 { 00:18:28.753 "name": "BaseBdev3", 00:18:28.753 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:28.753 "is_configured": true, 00:18:28.753 "data_offset": 2048, 00:18:28.753 "data_size": 63488 00:18:28.753 }, 00:18:28.753 { 00:18:28.753 "name": "BaseBdev4", 00:18:28.753 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:28.753 "is_configured": true, 00:18:28.753 "data_offset": 2048, 00:18:28.753 "data_size": 63488 00:18:28.753 } 00:18:28.753 ] 00:18:28.753 }' 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.753 11:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.323 [2024-11-05 11:34:28.464040] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:29.323 [2024-11-05 11:34:28.464119] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:29.323 [2024-11-05 11:34:28.464256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.582 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.582 "name": "raid_bdev1", 00:18:29.582 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:29.582 "strip_size_kb": 64, 00:18:29.582 "state": "online", 00:18:29.582 "raid_level": "raid5f", 00:18:29.582 "superblock": true, 00:18:29.582 "num_base_bdevs": 4, 00:18:29.582 "num_base_bdevs_discovered": 4, 00:18:29.582 "num_base_bdevs_operational": 4, 00:18:29.582 "base_bdevs_list": [ 00:18:29.582 { 00:18:29.582 "name": "spare", 00:18:29.582 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:29.582 "is_configured": true, 00:18:29.582 "data_offset": 2048, 00:18:29.582 "data_size": 63488 00:18:29.582 }, 00:18:29.582 { 00:18:29.582 "name": "BaseBdev2", 00:18:29.582 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:29.582 "is_configured": true, 00:18:29.582 "data_offset": 2048, 00:18:29.582 "data_size": 63488 00:18:29.582 }, 00:18:29.583 { 00:18:29.583 "name": "BaseBdev3", 00:18:29.583 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:29.583 "is_configured": true, 00:18:29.583 "data_offset": 2048, 00:18:29.583 "data_size": 63488 00:18:29.583 }, 00:18:29.583 { 00:18:29.583 "name": 
"BaseBdev4", 00:18:29.583 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:29.583 "is_configured": true, 00:18:29.583 "data_offset": 2048, 00:18:29.583 "data_size": 63488 00:18:29.583 } 00:18:29.583 ] 00:18:29.583 }' 00:18:29.583 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.583 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:29.583 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:29.843 "name": "raid_bdev1", 00:18:29.843 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:29.843 "strip_size_kb": 64, 00:18:29.843 "state": "online", 00:18:29.843 "raid_level": "raid5f", 00:18:29.843 "superblock": true, 00:18:29.843 "num_base_bdevs": 4, 00:18:29.843 "num_base_bdevs_discovered": 4, 00:18:29.843 "num_base_bdevs_operational": 4, 00:18:29.843 "base_bdevs_list": [ 00:18:29.843 { 00:18:29.843 "name": "spare", 00:18:29.843 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev2", 00:18:29.843 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev3", 00:18:29.843 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev4", 00:18:29.843 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 } 00:18:29.843 ] 00:18:29.843 }' 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.843 11:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.843 "name": "raid_bdev1", 00:18:29.843 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:29.843 "strip_size_kb": 64, 00:18:29.843 "state": "online", 00:18:29.843 "raid_level": "raid5f", 00:18:29.843 "superblock": true, 00:18:29.843 "num_base_bdevs": 4, 00:18:29.843 "num_base_bdevs_discovered": 4, 00:18:29.843 "num_base_bdevs_operational": 4, 00:18:29.843 "base_bdevs_list": [ 00:18:29.843 { 
00:18:29.843 "name": "spare", 00:18:29.843 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev2", 00:18:29.843 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev3", 00:18:29.843 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 }, 00:18:29.843 { 00:18:29.843 "name": "BaseBdev4", 00:18:29.843 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:29.843 "is_configured": true, 00:18:29.843 "data_offset": 2048, 00:18:29.843 "data_size": 63488 00:18:29.843 } 00:18:29.843 ] 00:18:29.843 }' 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.843 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.412 [2024-11-05 11:34:29.442913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.412 [2024-11-05 11:34:29.442946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.412 [2024-11-05 11:34:29.443049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.412 [2024-11-05 11:34:29.443156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.412 [2024-11-05 
11:34:29.443186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.412 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:30.413 11:34:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:30.413 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:30.413 /dev/nbd0 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.672 1+0 records in 00:18:30.672 1+0 records out 00:18:30.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371521 s, 11.0 MB/s 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:30.672 /dev/nbd1 00:18:30.672 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:30.673 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.940 1+0 records in 00:18:30.940 
1+0 records out 00:18:30.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453315 s, 9.0 MB/s 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:30.940 11:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.940 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:31.219 
11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:31.219 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:31.492 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 [2024-11-05 11:34:30.571572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.493 [2024-11-05 11:34:30.571639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.493 [2024-11-05 11:34:30.571667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:31.493 [2024-11-05 11:34:30.571679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.493 [2024-11-05 11:34:30.574007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.493 [2024-11-05 11:34:30.574041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.493 [2024-11-05 11:34:30.574122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:31.493 [2024-11-05 11:34:30.574188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.493 [2024-11-05 11:34:30.574357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.493 [2024-11-05 11:34:30.574453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.493 [2024-11-05 11:34:30.574534] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.493 spare 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 [2024-11-05 11:34:30.674427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:31.493 [2024-11-05 11:34:30.674458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:31.493 [2024-11-05 11:34:30.674711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:31.493 [2024-11-05 11:34:30.681291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:31.493 [2024-11-05 11:34:30.681314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:31.493 [2024-11-05 11:34:30.681480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.493 "name": "raid_bdev1", 00:18:31.493 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:31.493 "strip_size_kb": 64, 00:18:31.493 "state": "online", 00:18:31.493 "raid_level": "raid5f", 00:18:31.493 "superblock": true, 00:18:31.493 "num_base_bdevs": 4, 00:18:31.493 "num_base_bdevs_discovered": 4, 00:18:31.493 "num_base_bdevs_operational": 4, 00:18:31.493 "base_bdevs_list": [ 00:18:31.493 { 00:18:31.493 "name": "spare", 00:18:31.493 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:31.493 "is_configured": true, 00:18:31.493 "data_offset": 2048, 00:18:31.493 "data_size": 63488 00:18:31.493 }, 00:18:31.493 { 00:18:31.493 "name": "BaseBdev2", 00:18:31.493 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:31.493 "is_configured": true, 00:18:31.493 "data_offset": 
2048, 00:18:31.493 "data_size": 63488 00:18:31.493 }, 00:18:31.493 { 00:18:31.493 "name": "BaseBdev3", 00:18:31.493 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:31.493 "is_configured": true, 00:18:31.493 "data_offset": 2048, 00:18:31.493 "data_size": 63488 00:18:31.493 }, 00:18:31.493 { 00:18:31.493 "name": "BaseBdev4", 00:18:31.493 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:31.493 "is_configured": true, 00:18:31.493 "data_offset": 2048, 00:18:31.493 "data_size": 63488 00:18:31.493 } 00:18:31.493 ] 00:18:31.493 }' 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.493 11:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.062 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.062 "name": 
"raid_bdev1", 00:18:32.062 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:32.062 "strip_size_kb": 64, 00:18:32.062 "state": "online", 00:18:32.062 "raid_level": "raid5f", 00:18:32.062 "superblock": true, 00:18:32.062 "num_base_bdevs": 4, 00:18:32.062 "num_base_bdevs_discovered": 4, 00:18:32.062 "num_base_bdevs_operational": 4, 00:18:32.062 "base_bdevs_list": [ 00:18:32.063 { 00:18:32.063 "name": "spare", 00:18:32.063 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:32.063 "is_configured": true, 00:18:32.063 "data_offset": 2048, 00:18:32.063 "data_size": 63488 00:18:32.063 }, 00:18:32.063 { 00:18:32.063 "name": "BaseBdev2", 00:18:32.063 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:32.063 "is_configured": true, 00:18:32.063 "data_offset": 2048, 00:18:32.063 "data_size": 63488 00:18:32.063 }, 00:18:32.063 { 00:18:32.063 "name": "BaseBdev3", 00:18:32.063 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:32.063 "is_configured": true, 00:18:32.063 "data_offset": 2048, 00:18:32.063 "data_size": 63488 00:18:32.063 }, 00:18:32.063 { 00:18:32.063 "name": "BaseBdev4", 00:18:32.063 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:32.063 "is_configured": true, 00:18:32.063 "data_offset": 2048, 00:18:32.063 "data_size": 63488 00:18:32.063 } 00:18:32.063 ] 00:18:32.063 }' 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.063 [2024-11-05 11:34:31.324049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.063 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.323 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.323 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.323 "name": "raid_bdev1", 00:18:32.323 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:32.323 "strip_size_kb": 64, 00:18:32.323 "state": "online", 00:18:32.323 "raid_level": "raid5f", 00:18:32.323 "superblock": true, 00:18:32.323 "num_base_bdevs": 4, 00:18:32.323 "num_base_bdevs_discovered": 3, 00:18:32.323 "num_base_bdevs_operational": 3, 00:18:32.323 "base_bdevs_list": [ 00:18:32.323 { 00:18:32.323 "name": null, 00:18:32.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.323 "is_configured": false, 00:18:32.323 "data_offset": 0, 00:18:32.323 "data_size": 63488 00:18:32.323 }, 00:18:32.323 { 00:18:32.323 "name": "BaseBdev2", 00:18:32.323 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:32.323 "is_configured": true, 00:18:32.323 "data_offset": 2048, 00:18:32.323 "data_size": 63488 00:18:32.323 }, 00:18:32.323 { 00:18:32.323 "name": "BaseBdev3", 00:18:32.323 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:32.323 "is_configured": true, 00:18:32.323 "data_offset": 2048, 00:18:32.323 "data_size": 63488 00:18:32.323 }, 00:18:32.323 { 00:18:32.323 "name": "BaseBdev4", 00:18:32.323 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:32.323 "is_configured": true, 00:18:32.323 "data_offset": 
2048, 00:18:32.323 "data_size": 63488 00:18:32.323 } 00:18:32.323 ] 00:18:32.323 }' 00:18:32.323 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.323 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.583 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.583 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.583 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.583 [2024-11-05 11:34:31.791281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.583 [2024-11-05 11:34:31.791423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.583 [2024-11-05 11:34:31.791441] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:32.583 [2024-11-05 11:34:31.791475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.583 [2024-11-05 11:34:31.805414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:32.583 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.583 11:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:32.583 [2024-11-05 11:34:31.814004] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.964 "name": "raid_bdev1", 00:18:33.964 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:33.964 "strip_size_kb": 64, 00:18:33.964 "state": "online", 00:18:33.964 
"raid_level": "raid5f", 00:18:33.964 "superblock": true, 00:18:33.964 "num_base_bdevs": 4, 00:18:33.964 "num_base_bdevs_discovered": 4, 00:18:33.964 "num_base_bdevs_operational": 4, 00:18:33.964 "process": { 00:18:33.964 "type": "rebuild", 00:18:33.964 "target": "spare", 00:18:33.964 "progress": { 00:18:33.964 "blocks": 19200, 00:18:33.964 "percent": 10 00:18:33.964 } 00:18:33.964 }, 00:18:33.964 "base_bdevs_list": [ 00:18:33.964 { 00:18:33.964 "name": "spare", 00:18:33.964 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:33.964 "is_configured": true, 00:18:33.964 "data_offset": 2048, 00:18:33.964 "data_size": 63488 00:18:33.964 }, 00:18:33.964 { 00:18:33.964 "name": "BaseBdev2", 00:18:33.964 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:33.964 "is_configured": true, 00:18:33.964 "data_offset": 2048, 00:18:33.964 "data_size": 63488 00:18:33.964 }, 00:18:33.964 { 00:18:33.964 "name": "BaseBdev3", 00:18:33.964 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:33.964 "is_configured": true, 00:18:33.964 "data_offset": 2048, 00:18:33.964 "data_size": 63488 00:18:33.964 }, 00:18:33.964 { 00:18:33.964 "name": "BaseBdev4", 00:18:33.964 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:33.964 "is_configured": true, 00:18:33.964 "data_offset": 2048, 00:18:33.964 "data_size": 63488 00:18:33.964 } 00:18:33.964 ] 00:18:33.964 }' 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.964 11:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.964 [2024-11-05 11:34:32.964541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.964 [2024-11-05 11:34:33.019309] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.964 [2024-11-05 11:34:33.019367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.964 [2024-11-05 11:34:33.019382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.964 [2024-11-05 11:34:33.019390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.964 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.964 "name": "raid_bdev1", 00:18:33.964 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:33.964 "strip_size_kb": 64, 00:18:33.964 "state": "online", 00:18:33.964 "raid_level": "raid5f", 00:18:33.964 "superblock": true, 00:18:33.964 "num_base_bdevs": 4, 00:18:33.964 "num_base_bdevs_discovered": 3, 00:18:33.964 "num_base_bdevs_operational": 3, 00:18:33.964 "base_bdevs_list": [ 00:18:33.964 { 00:18:33.965 "name": null, 00:18:33.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.965 "is_configured": false, 00:18:33.965 "data_offset": 0, 00:18:33.965 "data_size": 63488 00:18:33.965 }, 00:18:33.965 { 00:18:33.965 "name": "BaseBdev2", 00:18:33.965 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:33.965 "is_configured": true, 00:18:33.965 "data_offset": 2048, 00:18:33.965 "data_size": 63488 00:18:33.965 }, 00:18:33.965 { 00:18:33.965 "name": "BaseBdev3", 00:18:33.965 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:33.965 "is_configured": true, 00:18:33.965 "data_offset": 2048, 00:18:33.965 "data_size": 63488 00:18:33.965 }, 00:18:33.965 { 00:18:33.965 "name": "BaseBdev4", 00:18:33.965 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:33.965 "is_configured": true, 00:18:33.965 "data_offset": 2048, 00:18:33.965 "data_size": 63488 00:18:33.965 } 00:18:33.965 ] 00:18:33.965 
}' 00:18:33.965 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.965 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.224 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.224 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.224 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.224 [2024-11-05 11:34:33.498956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.224 [2024-11-05 11:34:33.499013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.224 [2024-11-05 11:34:33.499039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:34.224 [2024-11-05 11:34:33.499051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.224 [2024-11-05 11:34:33.499539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.224 [2024-11-05 11:34:33.499570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.224 [2024-11-05 11:34:33.499653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:34.224 [2024-11-05 11:34:33.499667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.224 [2024-11-05 11:34:33.499679] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.224 [2024-11-05 11:34:33.499705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.484 [2024-11-05 11:34:33.514416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:34.484 spare 00:18:34.484 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.484 11:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:34.484 [2024-11-05 11:34:33.523234] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.423 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.423 "name": "raid_bdev1", 00:18:35.423 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:35.423 "strip_size_kb": 64, 00:18:35.423 "state": 
"online", 00:18:35.423 "raid_level": "raid5f", 00:18:35.423 "superblock": true, 00:18:35.423 "num_base_bdevs": 4, 00:18:35.423 "num_base_bdevs_discovered": 4, 00:18:35.423 "num_base_bdevs_operational": 4, 00:18:35.423 "process": { 00:18:35.423 "type": "rebuild", 00:18:35.423 "target": "spare", 00:18:35.423 "progress": { 00:18:35.423 "blocks": 19200, 00:18:35.423 "percent": 10 00:18:35.423 } 00:18:35.423 }, 00:18:35.423 "base_bdevs_list": [ 00:18:35.423 { 00:18:35.423 "name": "spare", 00:18:35.423 "uuid": "c867372b-0182-5be2-92bb-254c22e8c841", 00:18:35.423 "is_configured": true, 00:18:35.423 "data_offset": 2048, 00:18:35.423 "data_size": 63488 00:18:35.423 }, 00:18:35.423 { 00:18:35.423 "name": "BaseBdev2", 00:18:35.423 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:35.423 "is_configured": true, 00:18:35.423 "data_offset": 2048, 00:18:35.423 "data_size": 63488 00:18:35.423 }, 00:18:35.423 { 00:18:35.423 "name": "BaseBdev3", 00:18:35.424 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:35.424 "is_configured": true, 00:18:35.424 "data_offset": 2048, 00:18:35.424 "data_size": 63488 00:18:35.424 }, 00:18:35.424 { 00:18:35.424 "name": "BaseBdev4", 00:18:35.424 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:35.424 "is_configured": true, 00:18:35.424 "data_offset": 2048, 00:18:35.424 "data_size": 63488 00:18:35.424 } 00:18:35.424 ] 00:18:35.424 }' 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:35.424 11:34:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.424 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.424 [2024-11-05 11:34:34.665915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.683 [2024-11-05 11:34:34.728624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.683 [2024-11-05 11:34:34.728674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.683 [2024-11-05 11:34:34.728693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.684 [2024-11-05 11:34:34.728701] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.684 11:34:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.684 "name": "raid_bdev1", 00:18:35.684 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:35.684 "strip_size_kb": 64, 00:18:35.684 "state": "online", 00:18:35.684 "raid_level": "raid5f", 00:18:35.684 "superblock": true, 00:18:35.684 "num_base_bdevs": 4, 00:18:35.684 "num_base_bdevs_discovered": 3, 00:18:35.684 "num_base_bdevs_operational": 3, 00:18:35.684 "base_bdevs_list": [ 00:18:35.684 { 00:18:35.684 "name": null, 00:18:35.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.684 "is_configured": false, 00:18:35.684 "data_offset": 0, 00:18:35.684 "data_size": 63488 00:18:35.684 }, 00:18:35.684 { 00:18:35.684 "name": "BaseBdev2", 00:18:35.684 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:35.684 "is_configured": true, 00:18:35.684 "data_offset": 2048, 00:18:35.684 "data_size": 63488 00:18:35.684 }, 00:18:35.684 { 00:18:35.684 "name": "BaseBdev3", 00:18:35.684 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:35.684 "is_configured": true, 00:18:35.684 "data_offset": 2048, 00:18:35.684 "data_size": 63488 00:18:35.684 }, 00:18:35.684 { 00:18:35.684 "name": "BaseBdev4", 00:18:35.684 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:35.684 "is_configured": true, 00:18:35.684 "data_offset": 2048, 00:18:35.684 
"data_size": 63488 00:18:35.684 } 00:18:35.684 ] 00:18:35.684 }' 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.684 11:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.944 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.203 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.204 "name": "raid_bdev1", 00:18:36.204 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:36.204 "strip_size_kb": 64, 00:18:36.204 "state": "online", 00:18:36.204 "raid_level": "raid5f", 00:18:36.204 "superblock": true, 00:18:36.204 "num_base_bdevs": 4, 00:18:36.204 "num_base_bdevs_discovered": 3, 00:18:36.204 "num_base_bdevs_operational": 3, 00:18:36.204 "base_bdevs_list": [ 00:18:36.204 { 00:18:36.204 "name": null, 00:18:36.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.204 
"is_configured": false, 00:18:36.204 "data_offset": 0, 00:18:36.204 "data_size": 63488 00:18:36.204 }, 00:18:36.204 { 00:18:36.204 "name": "BaseBdev2", 00:18:36.204 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:36.204 "is_configured": true, 00:18:36.204 "data_offset": 2048, 00:18:36.204 "data_size": 63488 00:18:36.204 }, 00:18:36.204 { 00:18:36.204 "name": "BaseBdev3", 00:18:36.204 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:36.204 "is_configured": true, 00:18:36.204 "data_offset": 2048, 00:18:36.204 "data_size": 63488 00:18:36.204 }, 00:18:36.204 { 00:18:36.204 "name": "BaseBdev4", 00:18:36.204 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:36.204 "is_configured": true, 00:18:36.204 "data_offset": 2048, 00:18:36.204 "data_size": 63488 00:18:36.204 } 00:18:36.204 ] 00:18:36.204 }' 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.204 11:34:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.204 [2024-11-05 11:34:35.377433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:36.204 [2024-11-05 11:34:35.377483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.204 [2024-11-05 11:34:35.377503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:36.204 [2024-11-05 11:34:35.377512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.204 [2024-11-05 11:34:35.377960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.204 [2024-11-05 11:34:35.377986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:36.204 [2024-11-05 11:34:35.378057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:36.204 [2024-11-05 11:34:35.378070] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.204 [2024-11-05 11:34:35.378084] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:36.204 [2024-11-05 11:34:35.378094] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:36.204 BaseBdev1 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.204 11:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.143 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.402 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.402 "name": "raid_bdev1", 00:18:37.402 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:37.402 "strip_size_kb": 64, 00:18:37.402 "state": "online", 00:18:37.402 "raid_level": "raid5f", 00:18:37.402 "superblock": true, 00:18:37.402 "num_base_bdevs": 4, 00:18:37.402 "num_base_bdevs_discovered": 3, 00:18:37.402 "num_base_bdevs_operational": 3, 00:18:37.402 "base_bdevs_list": [ 00:18:37.402 { 00:18:37.402 "name": null, 00:18:37.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.402 "is_configured": false, 00:18:37.402 
"data_offset": 0, 00:18:37.402 "data_size": 63488 00:18:37.402 }, 00:18:37.402 { 00:18:37.402 "name": "BaseBdev2", 00:18:37.402 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:37.402 "is_configured": true, 00:18:37.402 "data_offset": 2048, 00:18:37.402 "data_size": 63488 00:18:37.402 }, 00:18:37.402 { 00:18:37.402 "name": "BaseBdev3", 00:18:37.402 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:37.402 "is_configured": true, 00:18:37.402 "data_offset": 2048, 00:18:37.402 "data_size": 63488 00:18:37.402 }, 00:18:37.402 { 00:18:37.402 "name": "BaseBdev4", 00:18:37.402 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:37.402 "is_configured": true, 00:18:37.402 "data_offset": 2048, 00:18:37.402 "data_size": 63488 00:18:37.402 } 00:18:37.402 ] 00:18:37.402 }' 00:18:37.403 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.403 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.662 "name": "raid_bdev1", 00:18:37.662 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:37.662 "strip_size_kb": 64, 00:18:37.662 "state": "online", 00:18:37.662 "raid_level": "raid5f", 00:18:37.662 "superblock": true, 00:18:37.662 "num_base_bdevs": 4, 00:18:37.662 "num_base_bdevs_discovered": 3, 00:18:37.662 "num_base_bdevs_operational": 3, 00:18:37.662 "base_bdevs_list": [ 00:18:37.662 { 00:18:37.662 "name": null, 00:18:37.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.662 "is_configured": false, 00:18:37.662 "data_offset": 0, 00:18:37.662 "data_size": 63488 00:18:37.662 }, 00:18:37.662 { 00:18:37.662 "name": "BaseBdev2", 00:18:37.662 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:37.662 "is_configured": true, 00:18:37.662 "data_offset": 2048, 00:18:37.662 "data_size": 63488 00:18:37.662 }, 00:18:37.662 { 00:18:37.662 "name": "BaseBdev3", 00:18:37.662 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:37.662 "is_configured": true, 00:18:37.662 "data_offset": 2048, 00:18:37.662 "data_size": 63488 00:18:37.662 }, 00:18:37.662 { 00:18:37.662 "name": "BaseBdev4", 00:18:37.662 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:37.662 "is_configured": true, 00:18:37.662 "data_offset": 2048, 00:18:37.662 "data_size": 63488 00:18:37.662 } 00:18:37.662 ] 00:18:37.662 }' 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.662 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.922 
11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.922 [2024-11-05 11:34:36.966772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.922 [2024-11-05 11:34:36.966912] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:37.922 [2024-11-05 11:34:36.966933] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:37.922 request: 00:18:37.922 { 00:18:37.922 "base_bdev": "BaseBdev1", 00:18:37.922 "raid_bdev": "raid_bdev1", 00:18:37.922 "method": "bdev_raid_add_base_bdev", 00:18:37.922 "req_id": 1 00:18:37.922 } 00:18:37.922 Got JSON-RPC error response 00:18:37.922 response: 00:18:37.922 { 00:18:37.922 "code": -22, 00:18:37.922 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:37.922 } 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.922 11:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.861 11:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.861 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.861 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.861 "name": "raid_bdev1", 00:18:38.861 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:38.861 "strip_size_kb": 64, 00:18:38.861 "state": "online", 00:18:38.861 "raid_level": "raid5f", 00:18:38.861 "superblock": true, 00:18:38.861 "num_base_bdevs": 4, 00:18:38.861 "num_base_bdevs_discovered": 3, 00:18:38.861 "num_base_bdevs_operational": 3, 00:18:38.861 "base_bdevs_list": [ 00:18:38.861 { 00:18:38.861 "name": null, 00:18:38.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.861 "is_configured": false, 00:18:38.861 "data_offset": 0, 00:18:38.861 "data_size": 63488 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": "BaseBdev2", 00:18:38.861 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 2048, 00:18:38.861 "data_size": 63488 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": "BaseBdev3", 00:18:38.861 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 2048, 00:18:38.861 "data_size": 63488 00:18:38.861 }, 00:18:38.861 { 00:18:38.861 "name": "BaseBdev4", 00:18:38.861 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:38.861 "is_configured": true, 00:18:38.861 "data_offset": 2048, 00:18:38.861 "data_size": 63488 00:18:38.861 } 00:18:38.861 ] 00:18:38.861 }' 00:18:38.861 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.861 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.431 "name": "raid_bdev1", 00:18:39.431 "uuid": "946eab8b-1359-4467-89a2-e34fb7b0f2e7", 00:18:39.431 "strip_size_kb": 64, 00:18:39.431 "state": "online", 00:18:39.431 "raid_level": "raid5f", 00:18:39.431 "superblock": true, 00:18:39.431 "num_base_bdevs": 4, 00:18:39.431 "num_base_bdevs_discovered": 3, 00:18:39.431 "num_base_bdevs_operational": 3, 00:18:39.431 "base_bdevs_list": [ 00:18:39.431 { 00:18:39.431 "name": null, 00:18:39.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.431 "is_configured": false, 00:18:39.431 "data_offset": 0, 00:18:39.431 "data_size": 63488 00:18:39.431 }, 00:18:39.431 { 00:18:39.431 "name": "BaseBdev2", 00:18:39.431 "uuid": "5d7a83fe-bcf0-5a73-834d-e2b092c53655", 00:18:39.431 "is_configured": true, 
00:18:39.431 "data_offset": 2048, 00:18:39.431 "data_size": 63488 00:18:39.431 }, 00:18:39.431 { 00:18:39.431 "name": "BaseBdev3", 00:18:39.431 "uuid": "d315088d-237f-565a-b543-cba3f7fc799f", 00:18:39.431 "is_configured": true, 00:18:39.431 "data_offset": 2048, 00:18:39.431 "data_size": 63488 00:18:39.431 }, 00:18:39.431 { 00:18:39.431 "name": "BaseBdev4", 00:18:39.431 "uuid": "db7b0fd9-9b43-541a-963f-ef35dcbeac6d", 00:18:39.431 "is_configured": true, 00:18:39.431 "data_offset": 2048, 00:18:39.431 "data_size": 63488 00:18:39.431 } 00:18:39.431 ] 00:18:39.431 }' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85127 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85127 ']' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85127 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85127 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:39.431 killing process with pid 85127 00:18:39.431 11:34:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85127' 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85127 00:18:39.431 Received shutdown signal, test time was about 60.000000 seconds 00:18:39.431 00:18:39.431 Latency(us) 00:18:39.431 [2024-11-05T11:34:38.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.431 [2024-11-05T11:34:38.705Z] =================================================================================================================== 00:18:39.431 [2024-11-05T11:34:38.705Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.431 [2024-11-05 11:34:38.625261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.431 [2024-11-05 11:34:38.625369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.431 [2024-11-05 11:34:38.625440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.431 11:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85127 00:18:39.431 [2024-11-05 11:34:38.625453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:40.001 [2024-11-05 11:34:39.079750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.952 11:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:40.952 00:18:40.952 real 0m26.699s 00:18:40.952 user 0m33.532s 00:18:40.952 sys 0m3.026s 00:18:40.952 11:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:40.952 11:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 ************************************ 00:18:40.952 END TEST raid5f_rebuild_test_sb 00:18:40.952 ************************************ 00:18:40.952 11:34:40 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:40.952 11:34:40 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:40.952 11:34:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:40.952 11:34:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:40.952 11:34:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.952 ************************************ 00:18:40.952 START TEST raid_state_function_test_sb_4k 00:18:40.952 ************************************ 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.952 11:34:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85934 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85934' 00:18:40.952 Process raid pid: 85934 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85934 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 85934 ']' 00:18:40.952 11:34:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.952 11:34:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.212 [2024-11-05 11:34:40.282423] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:18:41.212 [2024-11-05 11:34:40.282533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.212 [2024-11-05 11:34:40.455038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.472 [2024-11-05 11:34:40.561966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.732 [2024-11-05 11:34:40.757396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.732 [2024-11-05 11:34:40.757431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.992 [2024-11-05 11:34:41.109668] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.992 [2024-11-05 11:34:41.109720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.992 [2024-11-05 11:34:41.109729] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.992 [2024-11-05 11:34:41.109738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.992 
11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.992 "name": "Existed_Raid", 00:18:41.992 "uuid": "e9d78d3e-a7ac-45ae-a465-bddecd95e0ad", 00:18:41.992 "strip_size_kb": 0, 00:18:41.992 "state": "configuring", 00:18:41.992 "raid_level": "raid1", 00:18:41.992 "superblock": true, 00:18:41.992 "num_base_bdevs": 2, 00:18:41.992 "num_base_bdevs_discovered": 0, 00:18:41.992 "num_base_bdevs_operational": 2, 00:18:41.992 "base_bdevs_list": [ 00:18:41.992 { 00:18:41.992 "name": "BaseBdev1", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "is_configured": false, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 0 00:18:41.992 }, 00:18:41.992 { 00:18:41.992 "name": "BaseBdev2", 00:18:41.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.992 "is_configured": false, 00:18:41.992 "data_offset": 0, 00:18:41.992 "data_size": 0 00:18:41.992 } 00:18:41.992 ] 00:18:41.992 }' 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.992 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 [2024-11-05 11:34:41.576823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.562 [2024-11-05 11:34:41.576858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 [2024-11-05 11:34:41.588802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.562 [2024-11-05 11:34:41.588839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.562 [2024-11-05 11:34:41.588846] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.562 [2024-11-05 11:34:41.588857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 [2024-11-05 11:34:41.634224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.562 BaseBdev1 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 [ 00:18:42.562 { 00:18:42.562 "name": "BaseBdev1", 00:18:42.562 "aliases": [ 00:18:42.562 
"1a3b31fd-ac29-44d5-adf2-ad6888759b4e" 00:18:42.562 ], 00:18:42.562 "product_name": "Malloc disk", 00:18:42.562 "block_size": 4096, 00:18:42.562 "num_blocks": 8192, 00:18:42.562 "uuid": "1a3b31fd-ac29-44d5-adf2-ad6888759b4e", 00:18:42.562 "assigned_rate_limits": { 00:18:42.562 "rw_ios_per_sec": 0, 00:18:42.562 "rw_mbytes_per_sec": 0, 00:18:42.562 "r_mbytes_per_sec": 0, 00:18:42.562 "w_mbytes_per_sec": 0 00:18:42.562 }, 00:18:42.562 "claimed": true, 00:18:42.562 "claim_type": "exclusive_write", 00:18:42.562 "zoned": false, 00:18:42.562 "supported_io_types": { 00:18:42.562 "read": true, 00:18:42.562 "write": true, 00:18:42.562 "unmap": true, 00:18:42.562 "flush": true, 00:18:42.562 "reset": true, 00:18:42.562 "nvme_admin": false, 00:18:42.562 "nvme_io": false, 00:18:42.562 "nvme_io_md": false, 00:18:42.562 "write_zeroes": true, 00:18:42.562 "zcopy": true, 00:18:42.562 "get_zone_info": false, 00:18:42.562 "zone_management": false, 00:18:42.562 "zone_append": false, 00:18:42.562 "compare": false, 00:18:42.562 "compare_and_write": false, 00:18:42.562 "abort": true, 00:18:42.562 "seek_hole": false, 00:18:42.562 "seek_data": false, 00:18:42.562 "copy": true, 00:18:42.562 "nvme_iov_md": false 00:18:42.562 }, 00:18:42.562 "memory_domains": [ 00:18:42.562 { 00:18:42.562 "dma_device_id": "system", 00:18:42.562 "dma_device_type": 1 00:18:42.562 }, 00:18:42.562 { 00:18:42.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.562 "dma_device_type": 2 00:18:42.562 } 00:18:42.562 ], 00:18:42.562 "driver_specific": {} 00:18:42.562 } 00:18:42.562 ] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.562 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.562 "name": "Existed_Raid", 00:18:42.562 "uuid": "eb2c696c-500a-47cd-a11b-d2708c4159d1", 00:18:42.562 "strip_size_kb": 0, 00:18:42.562 "state": "configuring", 00:18:42.562 "raid_level": "raid1", 00:18:42.562 "superblock": true, 00:18:42.562 "num_base_bdevs": 2, 00:18:42.562 
"num_base_bdevs_discovered": 1, 00:18:42.562 "num_base_bdevs_operational": 2, 00:18:42.562 "base_bdevs_list": [ 00:18:42.562 { 00:18:42.562 "name": "BaseBdev1", 00:18:42.562 "uuid": "1a3b31fd-ac29-44d5-adf2-ad6888759b4e", 00:18:42.562 "is_configured": true, 00:18:42.562 "data_offset": 256, 00:18:42.562 "data_size": 7936 00:18:42.562 }, 00:18:42.562 { 00:18:42.563 "name": "BaseBdev2", 00:18:42.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.563 "is_configured": false, 00:18:42.563 "data_offset": 0, 00:18:42.563 "data_size": 0 00:18:42.563 } 00:18:42.563 ] 00:18:42.563 }' 00:18:42.563 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.563 11:34:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.132 [2024-11-05 11:34:42.141353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.132 [2024-11-05 11:34:42.141391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.132 [2024-11-05 11:34:42.153381] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.132 [2024-11-05 11:34:42.155088] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.132 [2024-11-05 11:34:42.155141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.132 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.132 "name": "Existed_Raid", 00:18:43.132 "uuid": "6126cf32-7204-4799-86a7-4510bb6f35cc", 00:18:43.132 "strip_size_kb": 0, 00:18:43.132 "state": "configuring", 00:18:43.132 "raid_level": "raid1", 00:18:43.132 "superblock": true, 00:18:43.132 "num_base_bdevs": 2, 00:18:43.132 "num_base_bdevs_discovered": 1, 00:18:43.132 "num_base_bdevs_operational": 2, 00:18:43.132 "base_bdevs_list": [ 00:18:43.132 { 00:18:43.132 "name": "BaseBdev1", 00:18:43.133 "uuid": "1a3b31fd-ac29-44d5-adf2-ad6888759b4e", 00:18:43.133 "is_configured": true, 00:18:43.133 "data_offset": 256, 00:18:43.133 "data_size": 7936 00:18:43.133 }, 00:18:43.133 { 00:18:43.133 "name": "BaseBdev2", 00:18:43.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.133 "is_configured": false, 00:18:43.133 "data_offset": 0, 00:18:43.133 "data_size": 0 00:18:43.133 } 00:18:43.133 ] 00:18:43.133 }' 00:18:43.133 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.133 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.392 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:43.392 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.392 11:34:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.653 [2024-11-05 11:34:42.682418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.653 [2024-11-05 11:34:42.682675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:43.653 [2024-11-05 11:34:42.682696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:43.653 [2024-11-05 11:34:42.682951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:43.653 BaseBdev2 00:18:43.653 [2024-11-05 11:34:42.683237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:43.653 [2024-11-05 11:34:42.683261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:43.653 [2024-11-05 11:34:42.683412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:43.653 11:34:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.653 [ 00:18:43.653 { 00:18:43.653 "name": "BaseBdev2", 00:18:43.653 "aliases": [ 00:18:43.653 "e4ba1298-3793-4be3-96a7-a9b23349922d" 00:18:43.653 ], 00:18:43.653 "product_name": "Malloc disk", 00:18:43.653 "block_size": 4096, 00:18:43.653 "num_blocks": 8192, 00:18:43.653 "uuid": "e4ba1298-3793-4be3-96a7-a9b23349922d", 00:18:43.653 "assigned_rate_limits": { 00:18:43.653 "rw_ios_per_sec": 0, 00:18:43.653 "rw_mbytes_per_sec": 0, 00:18:43.653 "r_mbytes_per_sec": 0, 00:18:43.653 "w_mbytes_per_sec": 0 00:18:43.653 }, 00:18:43.653 "claimed": true, 00:18:43.653 "claim_type": "exclusive_write", 00:18:43.653 "zoned": false, 00:18:43.653 "supported_io_types": { 00:18:43.653 "read": true, 00:18:43.653 "write": true, 00:18:43.653 "unmap": true, 00:18:43.653 "flush": true, 00:18:43.653 "reset": true, 00:18:43.653 "nvme_admin": false, 00:18:43.653 "nvme_io": false, 00:18:43.653 "nvme_io_md": false, 00:18:43.653 "write_zeroes": true, 00:18:43.653 "zcopy": true, 00:18:43.653 "get_zone_info": false, 00:18:43.653 "zone_management": false, 00:18:43.653 "zone_append": false, 00:18:43.653 "compare": false, 00:18:43.653 "compare_and_write": false, 00:18:43.653 "abort": true, 00:18:43.653 "seek_hole": false, 00:18:43.653 "seek_data": false, 00:18:43.653 "copy": true, 00:18:43.653 "nvme_iov_md": false 
00:18:43.653 }, 00:18:43.653 "memory_domains": [ 00:18:43.653 { 00:18:43.653 "dma_device_id": "system", 00:18:43.653 "dma_device_type": 1 00:18:43.653 }, 00:18:43.653 { 00:18:43.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.653 "dma_device_type": 2 00:18:43.653 } 00:18:43.653 ], 00:18:43.653 "driver_specific": {} 00:18:43.653 } 00:18:43.653 ] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.653 "name": "Existed_Raid", 00:18:43.653 "uuid": "6126cf32-7204-4799-86a7-4510bb6f35cc", 00:18:43.653 "strip_size_kb": 0, 00:18:43.653 "state": "online", 00:18:43.653 "raid_level": "raid1", 00:18:43.653 "superblock": true, 00:18:43.653 "num_base_bdevs": 2, 00:18:43.653 "num_base_bdevs_discovered": 2, 00:18:43.653 "num_base_bdevs_operational": 2, 00:18:43.653 "base_bdevs_list": [ 00:18:43.653 { 00:18:43.653 "name": "BaseBdev1", 00:18:43.653 "uuid": "1a3b31fd-ac29-44d5-adf2-ad6888759b4e", 00:18:43.653 "is_configured": true, 00:18:43.653 "data_offset": 256, 00:18:43.653 "data_size": 7936 00:18:43.653 }, 00:18:43.653 { 00:18:43.653 "name": "BaseBdev2", 00:18:43.653 "uuid": "e4ba1298-3793-4be3-96a7-a9b23349922d", 00:18:43.653 "is_configured": true, 00:18:43.653 "data_offset": 256, 00:18:43.653 "data_size": 7936 00:18:43.653 } 00:18:43.653 ] 00:18:43.653 }' 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.653 11:34:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.913 11:34:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.913 [2024-11-05 11:34:43.141860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.913 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.913 "name": "Existed_Raid", 00:18:43.913 "aliases": [ 00:18:43.913 "6126cf32-7204-4799-86a7-4510bb6f35cc" 00:18:43.913 ], 00:18:43.913 "product_name": "Raid Volume", 00:18:43.913 "block_size": 4096, 00:18:43.913 "num_blocks": 7936, 00:18:43.913 "uuid": "6126cf32-7204-4799-86a7-4510bb6f35cc", 00:18:43.913 "assigned_rate_limits": { 00:18:43.913 "rw_ios_per_sec": 0, 00:18:43.913 "rw_mbytes_per_sec": 0, 00:18:43.913 "r_mbytes_per_sec": 0, 00:18:43.913 "w_mbytes_per_sec": 0 00:18:43.913 }, 00:18:43.913 "claimed": false, 00:18:43.913 "zoned": false, 00:18:43.913 "supported_io_types": { 00:18:43.913 "read": true, 
00:18:43.913 "write": true, 00:18:43.913 "unmap": false, 00:18:43.913 "flush": false, 00:18:43.913 "reset": true, 00:18:43.913 "nvme_admin": false, 00:18:43.913 "nvme_io": false, 00:18:43.913 "nvme_io_md": false, 00:18:43.913 "write_zeroes": true, 00:18:43.913 "zcopy": false, 00:18:43.913 "get_zone_info": false, 00:18:43.913 "zone_management": false, 00:18:43.913 "zone_append": false, 00:18:43.913 "compare": false, 00:18:43.913 "compare_and_write": false, 00:18:43.913 "abort": false, 00:18:43.913 "seek_hole": false, 00:18:43.913 "seek_data": false, 00:18:43.913 "copy": false, 00:18:43.913 "nvme_iov_md": false 00:18:43.913 }, 00:18:43.913 "memory_domains": [ 00:18:43.913 { 00:18:43.913 "dma_device_id": "system", 00:18:43.913 "dma_device_type": 1 00:18:43.913 }, 00:18:43.913 { 00:18:43.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.913 "dma_device_type": 2 00:18:43.913 }, 00:18:43.913 { 00:18:43.913 "dma_device_id": "system", 00:18:43.913 "dma_device_type": 1 00:18:43.913 }, 00:18:43.913 { 00:18:43.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.913 "dma_device_type": 2 00:18:43.913 } 00:18:43.913 ], 00:18:43.913 "driver_specific": { 00:18:43.913 "raid": { 00:18:43.913 "uuid": "6126cf32-7204-4799-86a7-4510bb6f35cc", 00:18:43.913 "strip_size_kb": 0, 00:18:43.913 "state": "online", 00:18:43.913 "raid_level": "raid1", 00:18:43.913 "superblock": true, 00:18:43.913 "num_base_bdevs": 2, 00:18:43.913 "num_base_bdevs_discovered": 2, 00:18:43.913 "num_base_bdevs_operational": 2, 00:18:43.913 "base_bdevs_list": [ 00:18:43.913 { 00:18:43.913 "name": "BaseBdev1", 00:18:43.913 "uuid": "1a3b31fd-ac29-44d5-adf2-ad6888759b4e", 00:18:43.913 "is_configured": true, 00:18:43.913 "data_offset": 256, 00:18:43.913 "data_size": 7936 00:18:43.913 }, 00:18:43.913 { 00:18:43.913 "name": "BaseBdev2", 00:18:43.913 "uuid": "e4ba1298-3793-4be3-96a7-a9b23349922d", 00:18:43.913 "is_configured": true, 00:18:43.913 "data_offset": 256, 00:18:43.914 "data_size": 7936 00:18:43.914 } 
00:18:43.914 ] 00:18:43.914 } 00:18:43.914 } 00:18:43.914 }' 00:18:43.914 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:44.173 BaseBdev2' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.173 11:34:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.173 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.173 [2024-11-05 11:34:43.381266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.482 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.482 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:44.482 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:44.482 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:44.483 11:34:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.483 "name": "Existed_Raid", 00:18:44.483 "uuid": "6126cf32-7204-4799-86a7-4510bb6f35cc", 00:18:44.483 "strip_size_kb": 0, 00:18:44.483 "state": "online", 00:18:44.483 "raid_level": "raid1", 00:18:44.483 "superblock": true, 00:18:44.483 
"num_base_bdevs": 2, 00:18:44.483 "num_base_bdevs_discovered": 1, 00:18:44.483 "num_base_bdevs_operational": 1, 00:18:44.483 "base_bdevs_list": [ 00:18:44.483 { 00:18:44.483 "name": null, 00:18:44.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.483 "is_configured": false, 00:18:44.483 "data_offset": 0, 00:18:44.483 "data_size": 7936 00:18:44.483 }, 00:18:44.483 { 00:18:44.483 "name": "BaseBdev2", 00:18:44.483 "uuid": "e4ba1298-3793-4be3-96a7-a9b23349922d", 00:18:44.483 "is_configured": true, 00:18:44.483 "data_offset": 256, 00:18:44.483 "data_size": 7936 00:18:44.483 } 00:18:44.483 ] 00:18:44.483 }' 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.483 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.743 11:34:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.743 [2024-11-05 11:34:43.933308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.743 [2024-11-05 11:34:43.933406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.003 [2024-11-05 11:34:44.023474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.003 [2024-11-05 11:34:44.023527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.003 [2024-11-05 11:34:44.023538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:45.003 11:34:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85934 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 85934 ']' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 85934 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85934 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:45.003 killing process with pid 85934 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85934' 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 85934 00:18:45.003 [2024-11-05 11:34:44.112442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.003 11:34:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 85934 00:18:45.003 [2024-11-05 11:34:44.128379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.943 11:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:45.943 00:18:45.943 real 0m4.976s 00:18:45.943 user 0m7.231s 00:18:45.943 sys 0m0.891s 00:18:45.943 11:34:45 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:45.943 11:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:45.943 ************************************ 00:18:45.943 END TEST raid_state_function_test_sb_4k 00:18:45.943 ************************************ 00:18:46.203 11:34:45 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:46.203 11:34:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:46.203 11:34:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:46.203 11:34:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 ************************************ 00:18:46.203 START TEST raid_superblock_test_4k 00:18:46.203 ************************************ 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:46.203 
11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:46.203 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86186 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86186 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86186 ']' 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:46.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:46.204 11:34:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.204 [2024-11-05 11:34:45.328243] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:18:46.204 [2024-11-05 11:34:45.328366] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86186 ] 00:18:46.464 [2024-11-05 11:34:45.502229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.464 [2024-11-05 11:34:45.602574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.723 [2024-11-05 11:34:45.791147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.723 [2024-11-05 11:34:45.791207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 malloc1 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 [2024-11-05 11:34:46.189606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.984 [2024-11-05 11:34:46.189679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.984 [2024-11-05 11:34:46.189700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:46.984 [2024-11-05 11:34:46.189709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.984 [2024-11-05 11:34:46.191702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.984 [2024-11-05 11:34:46.191737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.984 pt1 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 malloc2 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 [2024-11-05 11:34:46.241411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.984 [2024-11-05 11:34:46.241462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.984 [2024-11-05 11:34:46.241481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:46.984 [2024-11-05 11:34:46.241491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.984 [2024-11-05 11:34:46.243541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.984 [2024-11-05 
11:34:46.243575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.984 pt2 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 [2024-11-05 11:34:46.253461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:46.984 [2024-11-05 11:34:46.255325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.984 [2024-11-05 11:34:46.255483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:46.984 [2024-11-05 11:34:46.255500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:46.984 [2024-11-05 11:34:46.255713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:46.984 [2024-11-05 11:34:46.255852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:46.984 [2024-11-05 11:34:46.255866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:46.984 [2024-11-05 11:34:46.255996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.984 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.244 "name": "raid_bdev1", 00:18:47.244 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:47.244 "strip_size_kb": 0, 00:18:47.244 "state": "online", 00:18:47.244 "raid_level": "raid1", 00:18:47.244 "superblock": true, 00:18:47.244 "num_base_bdevs": 2, 00:18:47.244 
"num_base_bdevs_discovered": 2, 00:18:47.244 "num_base_bdevs_operational": 2, 00:18:47.244 "base_bdevs_list": [ 00:18:47.244 { 00:18:47.244 "name": "pt1", 00:18:47.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:47.244 "is_configured": true, 00:18:47.244 "data_offset": 256, 00:18:47.244 "data_size": 7936 00:18:47.244 }, 00:18:47.244 { 00:18:47.244 "name": "pt2", 00:18:47.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.244 "is_configured": true, 00:18:47.244 "data_offset": 256, 00:18:47.244 "data_size": 7936 00:18:47.244 } 00:18:47.244 ] 00:18:47.244 }' 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.244 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.505 [2024-11-05 11:34:46.712834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.505 "name": "raid_bdev1", 00:18:47.505 "aliases": [ 00:18:47.505 "14fbc359-17bf-4696-9c34-bed20af9ee11" 00:18:47.505 ], 00:18:47.505 "product_name": "Raid Volume", 00:18:47.505 "block_size": 4096, 00:18:47.505 "num_blocks": 7936, 00:18:47.505 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:47.505 "assigned_rate_limits": { 00:18:47.505 "rw_ios_per_sec": 0, 00:18:47.505 "rw_mbytes_per_sec": 0, 00:18:47.505 "r_mbytes_per_sec": 0, 00:18:47.505 "w_mbytes_per_sec": 0 00:18:47.505 }, 00:18:47.505 "claimed": false, 00:18:47.505 "zoned": false, 00:18:47.505 "supported_io_types": { 00:18:47.505 "read": true, 00:18:47.505 "write": true, 00:18:47.505 "unmap": false, 00:18:47.505 "flush": false, 00:18:47.505 "reset": true, 00:18:47.505 "nvme_admin": false, 00:18:47.505 "nvme_io": false, 00:18:47.505 "nvme_io_md": false, 00:18:47.505 "write_zeroes": true, 00:18:47.505 "zcopy": false, 00:18:47.505 "get_zone_info": false, 00:18:47.505 "zone_management": false, 00:18:47.505 "zone_append": false, 00:18:47.505 "compare": false, 00:18:47.505 "compare_and_write": false, 00:18:47.505 "abort": false, 00:18:47.505 "seek_hole": false, 00:18:47.505 "seek_data": false, 00:18:47.505 "copy": false, 00:18:47.505 "nvme_iov_md": false 00:18:47.505 }, 00:18:47.505 "memory_domains": [ 00:18:47.505 { 00:18:47.505 "dma_device_id": "system", 00:18:47.505 "dma_device_type": 1 00:18:47.505 }, 00:18:47.505 { 00:18:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.505 "dma_device_type": 2 00:18:47.505 }, 00:18:47.505 { 00:18:47.505 "dma_device_id": "system", 00:18:47.505 "dma_device_type": 1 00:18:47.505 }, 00:18:47.505 { 00:18:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.505 "dma_device_type": 2 00:18:47.505 } 00:18:47.505 ], 
00:18:47.505 "driver_specific": { 00:18:47.505 "raid": { 00:18:47.505 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:47.505 "strip_size_kb": 0, 00:18:47.505 "state": "online", 00:18:47.505 "raid_level": "raid1", 00:18:47.505 "superblock": true, 00:18:47.505 "num_base_bdevs": 2, 00:18:47.505 "num_base_bdevs_discovered": 2, 00:18:47.505 "num_base_bdevs_operational": 2, 00:18:47.505 "base_bdevs_list": [ 00:18:47.505 { 00:18:47.505 "name": "pt1", 00:18:47.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:47.505 "is_configured": true, 00:18:47.505 "data_offset": 256, 00:18:47.505 "data_size": 7936 00:18:47.505 }, 00:18:47.505 { 00:18:47.505 "name": "pt2", 00:18:47.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.505 "is_configured": true, 00:18:47.505 "data_offset": 256, 00:18:47.505 "data_size": 7936 00:18:47.505 } 00:18:47.505 ] 00:18:47.505 } 00:18:47.505 } 00:18:47.505 }' 00:18:47.505 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:47.765 pt2' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:47.765 [2024-11-05 11:34:46.936468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=14fbc359-17bf-4696-9c34-bed20af9ee11 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 14fbc359-17bf-4696-9c34-bed20af9ee11 ']' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 [2024-11-05 11:34:46.984131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.765 [2024-11-05 11:34:46.984213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.765 [2024-11-05 11:34:46.984322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.765 [2024-11-05 11:34:46.984398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.765 [2024-11-05 11:34:46.984444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:47.765 11:34:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.765 11:34:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:47.765 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:47.765 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:47.766 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:47.766 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.766 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 [2024-11-05 11:34:47.123909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:48.026 [2024-11-05 11:34:47.125709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:48.026 [2024-11-05 11:34:47.125833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:48.026 [2024-11-05 11:34:47.125915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:48.026 [2024-11-05 11:34:47.125954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.026 [2024-11-05 11:34:47.125975] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:48.026 request: 00:18:48.026 { 00:18:48.026 "name": "raid_bdev1", 00:18:48.026 "raid_level": "raid1", 00:18:48.026 "base_bdevs": [ 00:18:48.026 "malloc1", 00:18:48.026 "malloc2" 00:18:48.026 ], 00:18:48.026 "superblock": false, 00:18:48.026 "method": "bdev_raid_create", 00:18:48.026 "req_id": 1 00:18:48.026 } 00:18:48.026 Got JSON-RPC error response 00:18:48.026 response: 00:18:48.026 { 00:18:48.026 "code": -17, 00:18:48.026 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:48.026 } 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 [2024-11-05 11:34:47.187785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.026 [2024-11-05 11:34:47.187887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.026 [2024-11-05 11:34:47.187917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:48.026 [2024-11-05 11:34:47.187946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.026 [2024-11-05 11:34:47.189944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.026 [2024-11-05 11:34:47.190015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.026 [2024-11-05 11:34:47.190097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:48.026 [2024-11-05 11:34:47.190193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:48.026 pt1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.026 "name": "raid_bdev1", 00:18:48.026 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:48.026 "strip_size_kb": 0, 00:18:48.026 "state": "configuring", 00:18:48.026 "raid_level": "raid1", 00:18:48.026 "superblock": true, 00:18:48.026 "num_base_bdevs": 2, 00:18:48.026 "num_base_bdevs_discovered": 1, 00:18:48.026 "num_base_bdevs_operational": 2, 00:18:48.026 "base_bdevs_list": [ 00:18:48.026 { 00:18:48.026 "name": "pt1", 00:18:48.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:48.026 "is_configured": true, 00:18:48.026 "data_offset": 256, 00:18:48.026 "data_size": 7936 00:18:48.026 }, 00:18:48.026 { 00:18:48.026 "name": null, 00:18:48.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.026 "is_configured": false, 00:18:48.026 "data_offset": 256, 00:18:48.026 "data_size": 7936 00:18:48.026 } 
00:18:48.026 ] 00:18:48.026 }' 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.026 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.596 [2024-11-05 11:34:47.643015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.596 [2024-11-05 11:34:47.643146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.596 [2024-11-05 11:34:47.643190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:48.596 [2024-11-05 11:34:47.643221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.596 [2024-11-05 11:34:47.643644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.596 [2024-11-05 11:34:47.643710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.596 [2024-11-05 11:34:47.643804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:48.596 [2024-11-05 11:34:47.643853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.596 [2024-11-05 11:34:47.643985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:48.596 [2024-11-05 11:34:47.644025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:48.596 [2024-11-05 11:34:47.644279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:48.596 [2024-11-05 11:34:47.644477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:48.596 [2024-11-05 11:34:47.644519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:48.596 [2024-11-05 11:34:47.644685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.596 pt2 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.596 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.596 "name": "raid_bdev1", 00:18:48.596 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:48.596 "strip_size_kb": 0, 00:18:48.596 "state": "online", 00:18:48.596 "raid_level": "raid1", 00:18:48.596 "superblock": true, 00:18:48.596 "num_base_bdevs": 2, 00:18:48.596 "num_base_bdevs_discovered": 2, 00:18:48.596 "num_base_bdevs_operational": 2, 00:18:48.596 "base_bdevs_list": [ 00:18:48.596 { 00:18:48.596 "name": "pt1", 00:18:48.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:48.596 "is_configured": true, 00:18:48.596 "data_offset": 256, 00:18:48.596 "data_size": 7936 00:18:48.596 }, 00:18:48.596 { 00:18:48.597 "name": "pt2", 00:18:48.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.597 "is_configured": true, 00:18:48.597 "data_offset": 256, 00:18:48.597 "data_size": 7936 00:18:48.597 } 00:18:48.597 ] 00:18:48.597 }' 00:18:48.597 11:34:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.597 11:34:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:48.856 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:48.857 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.857 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.857 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:49.117 [2024-11-05 11:34:48.130399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.117 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.117 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.117 "name": "raid_bdev1", 00:18:49.117 "aliases": [ 00:18:49.117 "14fbc359-17bf-4696-9c34-bed20af9ee11" 00:18:49.117 ], 00:18:49.117 "product_name": "Raid Volume", 00:18:49.117 "block_size": 4096, 00:18:49.117 "num_blocks": 7936, 00:18:49.117 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:49.117 "assigned_rate_limits": { 00:18:49.117 "rw_ios_per_sec": 0, 00:18:49.117 "rw_mbytes_per_sec": 0, 00:18:49.117 "r_mbytes_per_sec": 0, 00:18:49.117 "w_mbytes_per_sec": 0 00:18:49.117 }, 00:18:49.117 "claimed": false, 00:18:49.117 "zoned": false, 00:18:49.117 "supported_io_types": { 00:18:49.117 "read": true, 00:18:49.117 "write": true, 00:18:49.117 "unmap": false, 
00:18:49.117 "flush": false, 00:18:49.117 "reset": true, 00:18:49.117 "nvme_admin": false, 00:18:49.117 "nvme_io": false, 00:18:49.117 "nvme_io_md": false, 00:18:49.117 "write_zeroes": true, 00:18:49.117 "zcopy": false, 00:18:49.117 "get_zone_info": false, 00:18:49.117 "zone_management": false, 00:18:49.117 "zone_append": false, 00:18:49.117 "compare": false, 00:18:49.117 "compare_and_write": false, 00:18:49.117 "abort": false, 00:18:49.117 "seek_hole": false, 00:18:49.117 "seek_data": false, 00:18:49.117 "copy": false, 00:18:49.117 "nvme_iov_md": false 00:18:49.117 }, 00:18:49.117 "memory_domains": [ 00:18:49.117 { 00:18:49.118 "dma_device_id": "system", 00:18:49.118 "dma_device_type": 1 00:18:49.118 }, 00:18:49.118 { 00:18:49.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.118 "dma_device_type": 2 00:18:49.118 }, 00:18:49.118 { 00:18:49.118 "dma_device_id": "system", 00:18:49.118 "dma_device_type": 1 00:18:49.118 }, 00:18:49.118 { 00:18:49.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.118 "dma_device_type": 2 00:18:49.118 } 00:18:49.118 ], 00:18:49.118 "driver_specific": { 00:18:49.118 "raid": { 00:18:49.118 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:49.118 "strip_size_kb": 0, 00:18:49.118 "state": "online", 00:18:49.118 "raid_level": "raid1", 00:18:49.118 "superblock": true, 00:18:49.118 "num_base_bdevs": 2, 00:18:49.118 "num_base_bdevs_discovered": 2, 00:18:49.118 "num_base_bdevs_operational": 2, 00:18:49.118 "base_bdevs_list": [ 00:18:49.118 { 00:18:49.118 "name": "pt1", 00:18:49.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.118 "is_configured": true, 00:18:49.118 "data_offset": 256, 00:18:49.118 "data_size": 7936 00:18:49.118 }, 00:18:49.118 { 00:18:49.118 "name": "pt2", 00:18:49.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.118 "is_configured": true, 00:18:49.118 "data_offset": 256, 00:18:49.118 "data_size": 7936 00:18:49.118 } 00:18:49.118 ] 00:18:49.118 } 00:18:49.118 } 00:18:49.118 }' 00:18:49.118 
11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:49.118 pt2' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.118 
11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.118 [2024-11-05 11:34:48.361978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.118 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 14fbc359-17bf-4696-9c34-bed20af9ee11 '!=' 14fbc359-17bf-4696-9c34-bed20af9ee11 ']' 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.378 [2024-11-05 11:34:48.409715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:49.378 
11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.378 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.378 "name": "raid_bdev1", 00:18:49.378 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 
00:18:49.378 "strip_size_kb": 0, 00:18:49.378 "state": "online", 00:18:49.378 "raid_level": "raid1", 00:18:49.378 "superblock": true, 00:18:49.378 "num_base_bdevs": 2, 00:18:49.378 "num_base_bdevs_discovered": 1, 00:18:49.378 "num_base_bdevs_operational": 1, 00:18:49.378 "base_bdevs_list": [ 00:18:49.378 { 00:18:49.378 "name": null, 00:18:49.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.379 "is_configured": false, 00:18:49.379 "data_offset": 0, 00:18:49.379 "data_size": 7936 00:18:49.379 }, 00:18:49.379 { 00:18:49.379 "name": "pt2", 00:18:49.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.379 "is_configured": true, 00:18:49.379 "data_offset": 256, 00:18:49.379 "data_size": 7936 00:18:49.379 } 00:18:49.379 ] 00:18:49.379 }' 00:18:49.379 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.379 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.638 [2024-11-05 11:34:48.860907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.638 [2024-11-05 11:34:48.860933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.638 [2024-11-05 11:34:48.860982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.638 [2024-11-05 11:34:48.861019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.638 [2024-11-05 11:34:48.861029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:49.638 11:34:48 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.638 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.898 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:49.899 11:34:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.899 [2024-11-05 11:34:48.936799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:49.899 [2024-11-05 11:34:48.936854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.899 [2024-11-05 11:34:48.936869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:49.899 [2024-11-05 11:34:48.936879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.899 [2024-11-05 11:34:48.939085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.899 [2024-11-05 11:34:48.939124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:49.899 [2024-11-05 11:34:48.939216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:49.899 [2024-11-05 11:34:48.939283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:49.899 [2024-11-05 11:34:48.939395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:49.899 [2024-11-05 11:34:48.939422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:49.899 [2024-11-05 11:34:48.939631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:49.899 [2024-11-05 11:34:48.939784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:49.899 [2024-11-05 11:34:48.939801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:18:49.899 [2024-11-05 11:34:48.939929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.899 pt2 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.899 "name": "raid_bdev1", 00:18:49.899 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:49.899 "strip_size_kb": 0, 00:18:49.899 "state": "online", 00:18:49.899 "raid_level": "raid1", 00:18:49.899 "superblock": true, 00:18:49.899 "num_base_bdevs": 2, 00:18:49.899 "num_base_bdevs_discovered": 1, 00:18:49.899 "num_base_bdevs_operational": 1, 00:18:49.899 "base_bdevs_list": [ 00:18:49.899 { 00:18:49.899 "name": null, 00:18:49.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.899 "is_configured": false, 00:18:49.899 "data_offset": 256, 00:18:49.899 "data_size": 7936 00:18:49.899 }, 00:18:49.899 { 00:18:49.899 "name": "pt2", 00:18:49.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.899 "is_configured": true, 00:18:49.899 "data_offset": 256, 00:18:49.899 "data_size": 7936 00:18:49.899 } 00:18:49.899 ] 00:18:49.899 }' 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.899 11:34:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.159 [2024-11-05 11:34:49.399968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.159 [2024-11-05 11:34:49.399996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.159 [2024-11-05 11:34:49.400049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.159 [2024-11-05 11:34:49.400088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.159 [2024-11-05 11:34:49.400096] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:50.159 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.419 [2024-11-05 11:34:49.459885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:50.419 [2024-11-05 11:34:49.459935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.419 [2024-11-05 11:34:49.459952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:50.419 [2024-11-05 11:34:49.459960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.419 [2024-11-05 11:34:49.462081] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.419 [2024-11-05 11:34:49.462116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:50.419 [2024-11-05 11:34:49.462196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:50.419 [2024-11-05 11:34:49.462236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.419 [2024-11-05 11:34:49.462381] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:50.419 [2024-11-05 11:34:49.462398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.419 [2024-11-05 11:34:49.462412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:50.419 [2024-11-05 11:34:49.462488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.419 [2024-11-05 11:34:49.462575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:50.419 [2024-11-05 11:34:49.462588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:50.419 [2024-11-05 11:34:49.462813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:50.419 [2024-11-05 11:34:49.462951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:50.419 [2024-11-05 11:34:49.462971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:50.419 [2024-11-05 11:34:49.463108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.419 pt1 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.419 "name": "raid_bdev1", 00:18:50.419 "uuid": "14fbc359-17bf-4696-9c34-bed20af9ee11", 00:18:50.419 "strip_size_kb": 0, 00:18:50.419 "state": "online", 00:18:50.419 "raid_level": "raid1", 
00:18:50.419 "superblock": true, 00:18:50.419 "num_base_bdevs": 2, 00:18:50.419 "num_base_bdevs_discovered": 1, 00:18:50.419 "num_base_bdevs_operational": 1, 00:18:50.419 "base_bdevs_list": [ 00:18:50.419 { 00:18:50.419 "name": null, 00:18:50.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.419 "is_configured": false, 00:18:50.419 "data_offset": 256, 00:18:50.419 "data_size": 7936 00:18:50.419 }, 00:18:50.419 { 00:18:50.419 "name": "pt2", 00:18:50.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.419 "is_configured": true, 00:18:50.419 "data_offset": 256, 00:18:50.419 "data_size": 7936 00:18:50.419 } 00:18:50.419 ] 00:18:50.419 }' 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.419 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.678 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.938 
[2024-11-05 11:34:49.955304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 14fbc359-17bf-4696-9c34-bed20af9ee11 '!=' 14fbc359-17bf-4696-9c34-bed20af9ee11 ']' 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86186 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86186 ']' 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86186 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.938 11:34:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86186 00:18:50.938 killing process with pid 86186 00:18:50.938 11:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:50.938 11:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:50.938 11:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86186' 00:18:50.938 11:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86186 00:18:50.938 [2024-11-05 11:34:50.024240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.938 [2024-11-05 11:34:50.024309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.938 [2024-11-05 11:34:50.024345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.938 [2024-11-05 11:34:50.024357] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:50.938 11:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86186 00:18:51.197 [2024-11-05 11:34:50.217478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.137 ************************************ 00:18:52.137 END TEST raid_superblock_test_4k 00:18:52.137 ************************************ 00:18:52.137 11:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:52.137 00:18:52.137 real 0m6.016s 00:18:52.137 user 0m9.191s 00:18:52.137 sys 0m1.109s 00:18:52.137 11:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:52.137 11:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.137 11:34:51 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:52.137 11:34:51 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:52.137 11:34:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:52.137 11:34:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:52.137 11:34:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.137 ************************************ 00:18:52.137 START TEST raid_rebuild_test_sb_4k 00:18:52.137 ************************************ 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:52.137 11:34:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86509 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86509 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86509 ']' 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.137 11:34:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.397 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:52.397 Zero copy mechanism will not be used. 00:18:52.397 [2024-11-05 11:34:51.447899] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:18:52.397 [2024-11-05 11:34:51.448058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86509 ] 00:18:52.397 [2024-11-05 11:34:51.630741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.657 [2024-11-05 11:34:51.742491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.657 [2024-11-05 11:34:51.930971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.657 [2024-11-05 11:34:51.931028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 BaseBdev1_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 [2024-11-05 11:34:52.287016] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:53.227 [2024-11-05 11:34:52.287091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.227 [2024-11-05 11:34:52.287111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:53.227 [2024-11-05 11:34:52.287122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.227 [2024-11-05 11:34:52.289117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.227 [2024-11-05 11:34:52.289163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:53.227 BaseBdev1 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 BaseBdev2_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 [2024-11-05 11:34:52.338563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:53.227 [2024-11-05 11:34:52.338621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:53.227 [2024-11-05 11:34:52.338637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:53.227 [2024-11-05 11:34:52.338649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.227 [2024-11-05 11:34:52.340635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.227 [2024-11-05 11:34:52.340669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:53.227 BaseBdev2 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 spare_malloc 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 spare_delay 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.227 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 
[2024-11-05 11:34:52.437142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:53.227 [2024-11-05 11:34:52.437193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.227 [2024-11-05 11:34:52.437210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:53.227 [2024-11-05 11:34:52.437220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.227 [2024-11-05 11:34:52.439169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.227 [2024-11-05 11:34:52.439209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:53.228 spare 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.228 [2024-11-05 11:34:52.449172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.228 [2024-11-05 11:34:52.450869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.228 [2024-11-05 11:34:52.451042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:53.228 [2024-11-05 11:34:52.451058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:53.228 [2024-11-05 11:34:52.451334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:53.228 [2024-11-05 11:34:52.451505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:53.228 [2024-11-05 
11:34:52.451520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:53.228 [2024-11-05 11:34:52.451655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.228 11:34:52 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.487 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.487 "name": "raid_bdev1", 00:18:53.487 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:53.487 "strip_size_kb": 0, 00:18:53.487 "state": "online", 00:18:53.487 "raid_level": "raid1", 00:18:53.487 "superblock": true, 00:18:53.487 "num_base_bdevs": 2, 00:18:53.487 "num_base_bdevs_discovered": 2, 00:18:53.487 "num_base_bdevs_operational": 2, 00:18:53.487 "base_bdevs_list": [ 00:18:53.487 { 00:18:53.487 "name": "BaseBdev1", 00:18:53.487 "uuid": "fdeab7ac-43c9-5bc2-a319-2860d8540db1", 00:18:53.487 "is_configured": true, 00:18:53.487 "data_offset": 256, 00:18:53.487 "data_size": 7936 00:18:53.487 }, 00:18:53.487 { 00:18:53.487 "name": "BaseBdev2", 00:18:53.487 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:53.487 "is_configured": true, 00:18:53.487 "data_offset": 256, 00:18:53.487 "data_size": 7936 00:18:53.487 } 00:18:53.487 ] 00:18:53.487 }' 00:18:53.487 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.487 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:53.747 [2024-11-05 11:34:52.888651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.747 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:53.748 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.748 11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.748 
11:34:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:54.008 [2024-11-05 11:34:53.132032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:54.008 /dev/nbd0 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.008 1+0 records in 00:18:54.008 1+0 records out 00:18:54.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331797 s, 12.3 MB/s 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:54.008 11:34:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:54.008 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:54.577 7936+0 records in 00:18:54.577 7936+0 records out 00:18:54.577 32505856 bytes (33 MB, 31 MiB) copied, 0.645621 s, 50.3 MB/s 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.577 11:34:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.837 [2024-11-05 11:34:54.050174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.837 [2024-11-05 11:34:54.069007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.837 11:34:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.837 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.838 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.838 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.838 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.106 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.106 "name": "raid_bdev1", 00:18:55.106 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:55.106 "strip_size_kb": 0, 00:18:55.106 "state": "online", 00:18:55.106 "raid_level": "raid1", 00:18:55.106 "superblock": true, 00:18:55.106 "num_base_bdevs": 2, 00:18:55.106 "num_base_bdevs_discovered": 1, 00:18:55.106 "num_base_bdevs_operational": 1, 00:18:55.106 "base_bdevs_list": [ 00:18:55.106 { 00:18:55.106 "name": null, 00:18:55.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.106 "is_configured": false, 00:18:55.106 "data_offset": 0, 00:18:55.106 "data_size": 7936 00:18:55.106 }, 00:18:55.106 { 00:18:55.106 "name": "BaseBdev2", 00:18:55.106 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:55.106 "is_configured": true, 00:18:55.106 "data_offset": 256, 00:18:55.106 
"data_size": 7936 00:18:55.106 } 00:18:55.106 ] 00:18:55.106 }' 00:18:55.106 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.106 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.416 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.416 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.416 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.416 [2024-11-05 11:34:54.560219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.416 [2024-11-05 11:34:54.576878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:55.416 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.416 11:34:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:55.416 [2024-11-05 11:34:54.578719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.355 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.615 "name": "raid_bdev1", 00:18:56.615 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:56.615 "strip_size_kb": 0, 00:18:56.615 "state": "online", 00:18:56.615 "raid_level": "raid1", 00:18:56.615 "superblock": true, 00:18:56.615 "num_base_bdevs": 2, 00:18:56.615 "num_base_bdevs_discovered": 2, 00:18:56.615 "num_base_bdevs_operational": 2, 00:18:56.615 "process": { 00:18:56.615 "type": "rebuild", 00:18:56.615 "target": "spare", 00:18:56.615 "progress": { 00:18:56.615 "blocks": 2560, 00:18:56.615 "percent": 32 00:18:56.615 } 00:18:56.615 }, 00:18:56.615 "base_bdevs_list": [ 00:18:56.615 { 00:18:56.615 "name": "spare", 00:18:56.615 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:18:56.615 "is_configured": true, 00:18:56.615 "data_offset": 256, 00:18:56.615 "data_size": 7936 00:18:56.615 }, 00:18:56.615 { 00:18:56.615 "name": "BaseBdev2", 00:18:56.615 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:56.615 "is_configured": true, 00:18:56.615 "data_offset": 256, 00:18:56.615 "data_size": 7936 00:18:56.615 } 00:18:56.615 ] 00:18:56.615 }' 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.615 [2024-11-05 11:34:55.718063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.615 [2024-11-05 11:34:55.783508] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.615 [2024-11-05 11:34:55.783568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.615 [2024-11-05 11:34:55.783581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.615 [2024-11-05 11:34:55.783591] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.615 "name": "raid_bdev1", 00:18:56.615 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:56.615 "strip_size_kb": 0, 00:18:56.615 "state": "online", 00:18:56.615 "raid_level": "raid1", 00:18:56.615 "superblock": true, 00:18:56.615 "num_base_bdevs": 2, 00:18:56.615 "num_base_bdevs_discovered": 1, 00:18:56.615 "num_base_bdevs_operational": 1, 00:18:56.615 "base_bdevs_list": [ 00:18:56.615 { 00:18:56.615 "name": null, 00:18:56.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.615 "is_configured": false, 00:18:56.615 "data_offset": 0, 00:18:56.615 "data_size": 7936 00:18:56.615 }, 00:18:56.615 { 00:18:56.615 "name": "BaseBdev2", 00:18:56.615 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:56.615 "is_configured": true, 00:18:56.615 "data_offset": 256, 00:18:56.615 "data_size": 7936 00:18:56.615 } 00:18:56.615 ] 00:18:56.615 }' 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.615 11:34:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.184 11:34:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.184 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.184 "name": "raid_bdev1", 00:18:57.184 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:57.184 "strip_size_kb": 0, 00:18:57.184 "state": "online", 00:18:57.184 "raid_level": "raid1", 00:18:57.184 "superblock": true, 00:18:57.185 "num_base_bdevs": 2, 00:18:57.185 "num_base_bdevs_discovered": 1, 00:18:57.185 "num_base_bdevs_operational": 1, 00:18:57.185 "base_bdevs_list": [ 00:18:57.185 { 00:18:57.185 "name": null, 00:18:57.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.185 "is_configured": false, 00:18:57.185 "data_offset": 0, 00:18:57.185 "data_size": 7936 00:18:57.185 }, 00:18:57.185 { 00:18:57.185 "name": "BaseBdev2", 00:18:57.185 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:57.185 "is_configured": true, 00:18:57.185 "data_offset": 
256, 00:18:57.185 "data_size": 7936 00:18:57.185 } 00:18:57.185 ] 00:18:57.185 }' 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.185 [2024-11-05 11:34:56.392194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.185 [2024-11-05 11:34:56.407241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.185 11:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:57.185 [2024-11-05 11:34:56.409047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.565 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.565 "name": "raid_bdev1", 00:18:58.565 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:58.565 "strip_size_kb": 0, 00:18:58.565 "state": "online", 00:18:58.565 "raid_level": "raid1", 00:18:58.565 "superblock": true, 00:18:58.565 "num_base_bdevs": 2, 00:18:58.565 "num_base_bdevs_discovered": 2, 00:18:58.565 "num_base_bdevs_operational": 2, 00:18:58.565 "process": { 00:18:58.565 "type": "rebuild", 00:18:58.565 "target": "spare", 00:18:58.565 "progress": { 00:18:58.565 "blocks": 2560, 00:18:58.565 "percent": 32 00:18:58.565 } 00:18:58.565 }, 00:18:58.565 "base_bdevs_list": [ 00:18:58.565 { 00:18:58.565 "name": "spare", 00:18:58.565 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:18:58.565 "is_configured": true, 00:18:58.565 "data_offset": 256, 00:18:58.565 "data_size": 7936 00:18:58.565 }, 00:18:58.565 { 00:18:58.565 "name": "BaseBdev2", 00:18:58.565 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:58.565 "is_configured": true, 00:18:58.565 "data_offset": 256, 00:18:58.565 "data_size": 7936 00:18:58.565 } 00:18:58.565 ] 00:18:58.566 }' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:58.566 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=667 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.566 "name": "raid_bdev1", 00:18:58.566 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:58.566 "strip_size_kb": 0, 00:18:58.566 "state": "online", 00:18:58.566 "raid_level": "raid1", 00:18:58.566 "superblock": true, 00:18:58.566 "num_base_bdevs": 2, 00:18:58.566 "num_base_bdevs_discovered": 2, 00:18:58.566 "num_base_bdevs_operational": 2, 00:18:58.566 "process": { 00:18:58.566 "type": "rebuild", 00:18:58.566 "target": "spare", 00:18:58.566 "progress": { 00:18:58.566 "blocks": 2816, 00:18:58.566 "percent": 35 00:18:58.566 } 00:18:58.566 }, 00:18:58.566 "base_bdevs_list": [ 00:18:58.566 { 00:18:58.566 "name": "spare", 00:18:58.566 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:18:58.566 "is_configured": true, 00:18:58.566 "data_offset": 256, 00:18:58.566 "data_size": 7936 00:18:58.566 }, 00:18:58.566 { 00:18:58.566 "name": "BaseBdev2", 00:18:58.566 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:58.566 "is_configured": true, 00:18:58.566 "data_offset": 256, 00:18:58.566 "data_size": 7936 00:18:58.566 } 00:18:58.566 ] 00:18:58.566 }' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.566 11:34:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.503 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.503 "name": "raid_bdev1", 00:18:59.503 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:18:59.503 "strip_size_kb": 0, 00:18:59.503 "state": "online", 00:18:59.503 "raid_level": "raid1", 00:18:59.504 "superblock": true, 00:18:59.504 "num_base_bdevs": 2, 00:18:59.504 "num_base_bdevs_discovered": 2, 00:18:59.504 "num_base_bdevs_operational": 2, 00:18:59.504 "process": { 00:18:59.504 "type": "rebuild", 00:18:59.504 "target": "spare", 00:18:59.504 "progress": { 00:18:59.504 "blocks": 5632, 00:18:59.504 "percent": 70 00:18:59.504 } 00:18:59.504 }, 00:18:59.504 "base_bdevs_list": [ 00:18:59.504 { 
00:18:59.504 "name": "spare", 00:18:59.504 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:18:59.504 "is_configured": true, 00:18:59.504 "data_offset": 256, 00:18:59.504 "data_size": 7936 00:18:59.504 }, 00:18:59.504 { 00:18:59.504 "name": "BaseBdev2", 00:18:59.504 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:18:59.504 "is_configured": true, 00:18:59.504 "data_offset": 256, 00:18:59.504 "data_size": 7936 00:18:59.504 } 00:18:59.504 ] 00:18:59.504 }' 00:18:59.504 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.504 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.504 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.763 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.763 11:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.331 [2024-11-05 11:34:59.520810] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:00.331 [2024-11-05 11:34:59.520875] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:00.331 [2024-11-05 11:34:59.520960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.590 "name": "raid_bdev1", 00:19:00.590 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:00.590 "strip_size_kb": 0, 00:19:00.590 "state": "online", 00:19:00.590 "raid_level": "raid1", 00:19:00.590 "superblock": true, 00:19:00.590 "num_base_bdevs": 2, 00:19:00.590 "num_base_bdevs_discovered": 2, 00:19:00.590 "num_base_bdevs_operational": 2, 00:19:00.590 "base_bdevs_list": [ 00:19:00.590 { 00:19:00.590 "name": "spare", 00:19:00.590 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:00.590 "is_configured": true, 00:19:00.590 "data_offset": 256, 00:19:00.590 "data_size": 7936 00:19:00.590 }, 00:19:00.590 { 00:19:00.590 "name": "BaseBdev2", 00:19:00.590 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:00.590 "is_configured": true, 00:19:00.590 "data_offset": 256, 00:19:00.590 "data_size": 7936 00:19:00.590 } 00:19:00.590 ] 00:19:00.590 }' 00:19:00.590 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.849 "name": "raid_bdev1", 00:19:00.849 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:00.849 "strip_size_kb": 0, 00:19:00.849 "state": "online", 00:19:00.849 "raid_level": "raid1", 00:19:00.849 "superblock": true, 00:19:00.849 "num_base_bdevs": 2, 00:19:00.849 "num_base_bdevs_discovered": 2, 00:19:00.849 "num_base_bdevs_operational": 2, 00:19:00.849 "base_bdevs_list": [ 00:19:00.849 { 00:19:00.849 "name": "spare", 00:19:00.849 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:00.849 "is_configured": true, 00:19:00.849 
"data_offset": 256, 00:19:00.849 "data_size": 7936 00:19:00.849 }, 00:19:00.849 { 00:19:00.849 "name": "BaseBdev2", 00:19:00.849 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:00.849 "is_configured": true, 00:19:00.849 "data_offset": 256, 00:19:00.849 "data_size": 7936 00:19:00.849 } 00:19:00.849 ] 00:19:00.849 }' 00:19:00.849 11:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.849 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.849 "name": "raid_bdev1", 00:19:00.849 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:00.849 "strip_size_kb": 0, 00:19:00.849 "state": "online", 00:19:00.849 "raid_level": "raid1", 00:19:00.849 "superblock": true, 00:19:00.849 "num_base_bdevs": 2, 00:19:00.849 "num_base_bdevs_discovered": 2, 00:19:00.849 "num_base_bdevs_operational": 2, 00:19:00.849 "base_bdevs_list": [ 00:19:00.849 { 00:19:00.849 "name": "spare", 00:19:00.849 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:00.849 "is_configured": true, 00:19:00.849 "data_offset": 256, 00:19:00.849 "data_size": 7936 00:19:00.849 }, 00:19:00.850 { 00:19:00.850 "name": "BaseBdev2", 00:19:00.850 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:00.850 "is_configured": true, 00:19:00.850 "data_offset": 256, 00:19:00.850 "data_size": 7936 00:19:00.850 } 00:19:00.850 ] 00:19:00.850 }' 00:19:00.850 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.850 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.422 
[2024-11-05 11:35:00.472060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.422 [2024-11-05 11:35:00.472097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.422 [2024-11-05 11:35:00.472186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.422 [2024-11-05 11:35:00.472251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.422 [2024-11-05 11:35:00.472262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.422 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.423 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.423 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:01.423 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.423 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.423 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:01.682 /dev/nbd0 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.682 1+0 records in 00:19:01.682 1+0 records out 00:19:01.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041172 s, 9.9 MB/s 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.682 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:01.942 /dev/nbd1 00:19:01.942 11:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.942 1+0 records in 00:19:01.942 1+0 records out 00:19:01.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455614 s, 9.0 MB/s 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:01.942 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.202 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:02.461 11:35:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.461 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.462 [2024-11-05 11:35:01.645512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:02.462 [2024-11-05 11:35:01.645566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.462 [2024-11-05 11:35:01.645589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:02.462 [2024-11-05 11:35:01.645597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.462 [2024-11-05 11:35:01.647710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.462 
[2024-11-05 11:35:01.647748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:02.462 [2024-11-05 11:35:01.647838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:02.462 [2024-11-05 11:35:01.647888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.462 [2024-11-05 11:35:01.648063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.462 spare 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.462 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.720 [2024-11-05 11:35:01.747975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:02.720 [2024-11-05 11:35:01.748007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.720 [2024-11-05 11:35:01.748278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:02.720 [2024-11-05 11:35:01.748443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:02.720 [2024-11-05 11:35:01.748463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:02.720 [2024-11-05 11:35:01.748644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.720 11:35:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.720 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.721 "name": "raid_bdev1", 00:19:02.721 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:02.721 "strip_size_kb": 0, 00:19:02.721 "state": "online", 00:19:02.721 "raid_level": "raid1", 00:19:02.721 "superblock": true, 00:19:02.721 "num_base_bdevs": 2, 00:19:02.721 "num_base_bdevs_discovered": 2, 00:19:02.721 "num_base_bdevs_operational": 2, 
00:19:02.721 "base_bdevs_list": [ 00:19:02.721 { 00:19:02.721 "name": "spare", 00:19:02.721 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:02.721 "is_configured": true, 00:19:02.721 "data_offset": 256, 00:19:02.721 "data_size": 7936 00:19:02.721 }, 00:19:02.721 { 00:19:02.721 "name": "BaseBdev2", 00:19:02.721 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:02.721 "is_configured": true, 00:19:02.721 "data_offset": 256, 00:19:02.721 "data_size": 7936 00:19:02.721 } 00:19:02.721 ] 00:19:02.721 }' 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.721 11:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.980 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.243 "name": "raid_bdev1", 00:19:03.243 
"uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:03.243 "strip_size_kb": 0, 00:19:03.243 "state": "online", 00:19:03.243 "raid_level": "raid1", 00:19:03.243 "superblock": true, 00:19:03.243 "num_base_bdevs": 2, 00:19:03.243 "num_base_bdevs_discovered": 2, 00:19:03.243 "num_base_bdevs_operational": 2, 00:19:03.243 "base_bdevs_list": [ 00:19:03.243 { 00:19:03.243 "name": "spare", 00:19:03.243 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:03.243 "is_configured": true, 00:19:03.243 "data_offset": 256, 00:19:03.243 "data_size": 7936 00:19:03.243 }, 00:19:03.243 { 00:19:03.243 "name": "BaseBdev2", 00:19:03.243 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:03.243 "is_configured": true, 00:19:03.243 "data_offset": 256, 00:19:03.243 "data_size": 7936 00:19:03.243 } 00:19:03.243 ] 00:19:03.243 }' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.243 [2024-11-05 11:35:02.400293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.243 
11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.243 "name": "raid_bdev1", 00:19:03.243 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:03.243 "strip_size_kb": 0, 00:19:03.243 "state": "online", 00:19:03.243 "raid_level": "raid1", 00:19:03.243 "superblock": true, 00:19:03.243 "num_base_bdevs": 2, 00:19:03.243 "num_base_bdevs_discovered": 1, 00:19:03.243 "num_base_bdevs_operational": 1, 00:19:03.243 "base_bdevs_list": [ 00:19:03.243 { 00:19:03.243 "name": null, 00:19:03.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.243 "is_configured": false, 00:19:03.243 "data_offset": 0, 00:19:03.243 "data_size": 7936 00:19:03.243 }, 00:19:03.243 { 00:19:03.243 "name": "BaseBdev2", 00:19:03.243 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:03.243 "is_configured": true, 00:19:03.243 "data_offset": 256, 00:19:03.243 "data_size": 7936 00:19:03.243 } 00:19:03.243 ] 00:19:03.243 }' 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.243 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.810 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.810 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.810 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.810 [2024-11-05 11:35:02.867506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.810 [2024-11-05 11:35:02.867677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:19:03.810 [2024-11-05 11:35:02.867695] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:03.810 [2024-11-05 11:35:02.867725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.810 [2024-11-05 11:35:02.882863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:03.810 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.810 11:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:03.810 [2024-11-05 11:35:02.884662] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.748 
"name": "raid_bdev1", 00:19:04.748 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:04.748 "strip_size_kb": 0, 00:19:04.748 "state": "online", 00:19:04.748 "raid_level": "raid1", 00:19:04.748 "superblock": true, 00:19:04.748 "num_base_bdevs": 2, 00:19:04.748 "num_base_bdevs_discovered": 2, 00:19:04.748 "num_base_bdevs_operational": 2, 00:19:04.748 "process": { 00:19:04.748 "type": "rebuild", 00:19:04.748 "target": "spare", 00:19:04.748 "progress": { 00:19:04.748 "blocks": 2560, 00:19:04.748 "percent": 32 00:19:04.748 } 00:19:04.748 }, 00:19:04.748 "base_bdevs_list": [ 00:19:04.748 { 00:19:04.748 "name": "spare", 00:19:04.748 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:04.748 "is_configured": true, 00:19:04.748 "data_offset": 256, 00:19:04.748 "data_size": 7936 00:19:04.748 }, 00:19:04.748 { 00:19:04.748 "name": "BaseBdev2", 00:19:04.748 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:04.748 "is_configured": true, 00:19:04.748 "data_offset": 256, 00:19:04.748 "data_size": 7936 00:19:04.748 } 00:19:04.748 ] 00:19:04.748 }' 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.748 11:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 [2024-11-05 11:35:04.044566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.008 [2024-11-05 
11:35:04.089412] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.008 [2024-11-05 11:35:04.089468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.008 [2024-11-05 11:35:04.089481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.008 [2024-11-05 11:35:04.089489] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.008 11:35:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.008 "name": "raid_bdev1", 00:19:05.008 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:05.008 "strip_size_kb": 0, 00:19:05.008 "state": "online", 00:19:05.008 "raid_level": "raid1", 00:19:05.008 "superblock": true, 00:19:05.008 "num_base_bdevs": 2, 00:19:05.008 "num_base_bdevs_discovered": 1, 00:19:05.008 "num_base_bdevs_operational": 1, 00:19:05.008 "base_bdevs_list": [ 00:19:05.008 { 00:19:05.008 "name": null, 00:19:05.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.008 "is_configured": false, 00:19:05.008 "data_offset": 0, 00:19:05.008 "data_size": 7936 00:19:05.008 }, 00:19:05.008 { 00:19:05.008 "name": "BaseBdev2", 00:19:05.008 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:05.008 "is_configured": true, 00:19:05.008 "data_offset": 256, 00:19:05.008 "data_size": 7936 00:19:05.008 } 00:19:05.008 ] 00:19:05.008 }' 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.008 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.577 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.577 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 [2024-11-05 11:35:04.609711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.577 [2024-11-05 11:35:04.609778] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.577 [2024-11-05 11:35:04.609799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:05.577 [2024-11-05 11:35:04.609811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.577 [2024-11-05 11:35:04.610313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.577 [2024-11-05 11:35:04.610336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.577 [2024-11-05 11:35:04.610426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:05.577 [2024-11-05 11:35:04.610442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.577 [2024-11-05 11:35:04.610451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:05.577 [2024-11-05 11:35:04.610477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.577 [2024-11-05 11:35:04.625710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:05.577 spare 00:19:05.577 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.577 11:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:05.577 [2024-11-05 11:35:04.627515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.515 "name": "raid_bdev1", 00:19:06.515 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:06.515 "strip_size_kb": 0, 00:19:06.515 
"state": "online", 00:19:06.515 "raid_level": "raid1", 00:19:06.515 "superblock": true, 00:19:06.515 "num_base_bdevs": 2, 00:19:06.515 "num_base_bdevs_discovered": 2, 00:19:06.515 "num_base_bdevs_operational": 2, 00:19:06.515 "process": { 00:19:06.515 "type": "rebuild", 00:19:06.515 "target": "spare", 00:19:06.515 "progress": { 00:19:06.515 "blocks": 2560, 00:19:06.515 "percent": 32 00:19:06.515 } 00:19:06.515 }, 00:19:06.515 "base_bdevs_list": [ 00:19:06.515 { 00:19:06.515 "name": "spare", 00:19:06.515 "uuid": "7d449fc2-bb83-5aef-bc4e-5f6c50c37473", 00:19:06.515 "is_configured": true, 00:19:06.515 "data_offset": 256, 00:19:06.515 "data_size": 7936 00:19:06.515 }, 00:19:06.515 { 00:19:06.515 "name": "BaseBdev2", 00:19:06.515 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:06.515 "is_configured": true, 00:19:06.515 "data_offset": 256, 00:19:06.515 "data_size": 7936 00:19:06.515 } 00:19:06.515 ] 00:19:06.515 }' 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.515 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.774 [2024-11-05 11:35:05.791804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.774 [2024-11-05 11:35:05.832192] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:06.774 [2024-11-05 11:35:05.832245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.774 [2024-11-05 11:35:05.832262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.774 [2024-11-05 11:35:05.832268] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.774 11:35:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.774 "name": "raid_bdev1", 00:19:06.774 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:06.774 "strip_size_kb": 0, 00:19:06.774 "state": "online", 00:19:06.774 "raid_level": "raid1", 00:19:06.774 "superblock": true, 00:19:06.774 "num_base_bdevs": 2, 00:19:06.774 "num_base_bdevs_discovered": 1, 00:19:06.774 "num_base_bdevs_operational": 1, 00:19:06.774 "base_bdevs_list": [ 00:19:06.774 { 00:19:06.774 "name": null, 00:19:06.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.774 "is_configured": false, 00:19:06.774 "data_offset": 0, 00:19:06.774 "data_size": 7936 00:19:06.774 }, 00:19:06.774 { 00:19:06.774 "name": "BaseBdev2", 00:19:06.774 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:06.774 "is_configured": true, 00:19:06.774 "data_offset": 256, 00:19:06.774 "data_size": 7936 00:19:06.774 } 00:19:06.774 ] 00:19:06.774 }' 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.774 11:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.034 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.293 "name": "raid_bdev1", 00:19:07.293 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:07.293 "strip_size_kb": 0, 00:19:07.293 "state": "online", 00:19:07.293 "raid_level": "raid1", 00:19:07.293 "superblock": true, 00:19:07.293 "num_base_bdevs": 2, 00:19:07.293 "num_base_bdevs_discovered": 1, 00:19:07.293 "num_base_bdevs_operational": 1, 00:19:07.293 "base_bdevs_list": [ 00:19:07.293 { 00:19:07.293 "name": null, 00:19:07.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.293 "is_configured": false, 00:19:07.293 "data_offset": 0, 00:19:07.293 "data_size": 7936 00:19:07.293 }, 00:19:07.293 { 00:19:07.293 "name": "BaseBdev2", 00:19:07.293 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:07.293 "is_configured": true, 00:19:07.293 "data_offset": 256, 00:19:07.293 "data_size": 7936 00:19:07.293 } 00:19:07.293 ] 00:19:07.293 }' 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.293 [2024-11-05 11:35:06.404152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:07.293 [2024-11-05 11:35:06.404201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.293 [2024-11-05 11:35:06.404222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:07.293 [2024-11-05 11:35:06.404240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.293 [2024-11-05 11:35:06.404684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.293 [2024-11-05 11:35:06.404711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.293 [2024-11-05 11:35:06.404785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:07.293 [2024-11-05 11:35:06.404798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:07.293 [2024-11-05 11:35:06.404808] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:07.293 [2024-11-05 11:35:06.404818] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:07.293 BaseBdev1 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.293 11:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.230 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.231 "name": "raid_bdev1", 00:19:08.231 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:08.231 "strip_size_kb": 0, 00:19:08.231 "state": "online", 00:19:08.231 "raid_level": "raid1", 00:19:08.231 "superblock": true, 00:19:08.231 "num_base_bdevs": 2, 00:19:08.231 "num_base_bdevs_discovered": 1, 00:19:08.231 "num_base_bdevs_operational": 1, 00:19:08.231 "base_bdevs_list": [ 00:19:08.231 { 00:19:08.231 "name": null, 00:19:08.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.231 "is_configured": false, 00:19:08.231 "data_offset": 0, 00:19:08.231 "data_size": 7936 00:19:08.231 }, 00:19:08.231 { 00:19:08.231 "name": "BaseBdev2", 00:19:08.231 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:08.231 "is_configured": true, 00:19:08.231 "data_offset": 256, 00:19:08.231 "data_size": 7936 00:19:08.231 } 00:19:08.231 ] 00:19:08.231 }' 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.231 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.800 "name": "raid_bdev1", 00:19:08.800 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:08.800 "strip_size_kb": 0, 00:19:08.800 "state": "online", 00:19:08.800 "raid_level": "raid1", 00:19:08.800 "superblock": true, 00:19:08.800 "num_base_bdevs": 2, 00:19:08.800 "num_base_bdevs_discovered": 1, 00:19:08.800 "num_base_bdevs_operational": 1, 00:19:08.800 "base_bdevs_list": [ 00:19:08.800 { 00:19:08.800 "name": null, 00:19:08.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.800 "is_configured": false, 00:19:08.800 "data_offset": 0, 00:19:08.800 "data_size": 7936 00:19:08.800 }, 00:19:08.800 { 00:19:08.800 "name": "BaseBdev2", 00:19:08.800 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:08.800 "is_configured": true, 00:19:08.800 "data_offset": 256, 00:19:08.800 "data_size": 7936 00:19:08.800 } 00:19:08.800 ] 00:19:08.800 }' 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.800 [2024-11-05 11:35:07.985449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.800 [2024-11-05 11:35:07.985616] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.800 [2024-11-05 11:35:07.985631] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.800 request: 00:19:08.800 { 00:19:08.800 "base_bdev": "BaseBdev1", 00:19:08.800 "raid_bdev": "raid_bdev1", 00:19:08.800 "method": "bdev_raid_add_base_bdev", 00:19:08.800 "req_id": 1 00:19:08.800 } 00:19:08.800 Got JSON-RPC error response 00:19:08.800 response: 00:19:08.800 { 00:19:08.800 "code": -22, 00:19:08.800 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:08.800 } 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.800 11:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.737 11:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:09.737 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.997 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.997 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.997 "name": "raid_bdev1", 00:19:09.997 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:09.997 "strip_size_kb": 0, 00:19:09.997 "state": "online", 00:19:09.997 "raid_level": "raid1", 00:19:09.997 "superblock": true, 00:19:09.997 "num_base_bdevs": 2, 00:19:09.997 "num_base_bdevs_discovered": 1, 00:19:09.997 "num_base_bdevs_operational": 1, 00:19:09.997 "base_bdevs_list": [ 00:19:09.997 { 00:19:09.997 "name": null, 00:19:09.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.997 "is_configured": false, 00:19:09.997 "data_offset": 0, 00:19:09.997 "data_size": 7936 00:19:09.997 }, 00:19:09.997 { 00:19:09.997 "name": "BaseBdev2", 00:19:09.997 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:09.997 "is_configured": true, 00:19:09.997 "data_offset": 256, 00:19:09.997 "data_size": 7936 00:19:09.997 } 00:19:09.997 ] 00:19:09.997 }' 00:19:09.997 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.997 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.256 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.256 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.257 11:35:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.257 "name": "raid_bdev1", 00:19:10.257 "uuid": "9e222d80-cced-4cc3-a10d-a5ddce4cf0a5", 00:19:10.257 "strip_size_kb": 0, 00:19:10.257 "state": "online", 00:19:10.257 "raid_level": "raid1", 00:19:10.257 "superblock": true, 00:19:10.257 "num_base_bdevs": 2, 00:19:10.257 "num_base_bdevs_discovered": 1, 00:19:10.257 "num_base_bdevs_operational": 1, 00:19:10.257 "base_bdevs_list": [ 00:19:10.257 { 00:19:10.257 "name": null, 00:19:10.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.257 "is_configured": false, 00:19:10.257 "data_offset": 0, 00:19:10.257 "data_size": 7936 00:19:10.257 }, 00:19:10.257 { 00:19:10.257 "name": "BaseBdev2", 00:19:10.257 "uuid": "f52b6e14-2433-547c-8668-a6b4517c2159", 00:19:10.257 "is_configured": true, 00:19:10.257 "data_offset": 256, 00:19:10.257 "data_size": 7936 00:19:10.257 } 00:19:10.257 ] 00:19:10.257 }' 00:19:10.257 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.524 11:35:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86509 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86509 ']' 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86509 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86509 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:10.524 killing process with pid 86509 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86509' 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86509 00:19:10.524 Received shutdown signal, test time was about 60.000000 seconds 00:19:10.524 00:19:10.524 Latency(us) 00:19:10.524 [2024-11-05T11:35:09.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.524 [2024-11-05T11:35:09.798Z] =================================================================================================================== 00:19:10.524 [2024-11-05T11:35:09.798Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.524 [2024-11-05 11:35:09.645958] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.524 [2024-11-05 11:35:09.646078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.524 [2024-11-05 11:35:09.646144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:19:10.524 11:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86509 00:19:10.524 [2024-11-05 11:35:09.646156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:10.784 [2024-11-05 11:35:09.937584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.724 11:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:11.724 00:19:11.724 real 0m19.620s 00:19:11.724 user 0m25.563s 00:19:11.724 sys 0m2.656s 00:19:11.724 11:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:11.724 11:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.724 ************************************ 00:19:11.724 END TEST raid_rebuild_test_sb_4k 00:19:11.724 ************************************ 00:19:11.984 11:35:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:11.984 11:35:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:11.984 11:35:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:11.984 11:35:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:11.984 11:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.984 ************************************ 00:19:11.984 START TEST raid_state_function_test_sb_md_separate 00:19:11.984 ************************************ 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:11.984 11:35:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:11.984 11:35:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87198 00:19:11.984 Process raid pid: 87198 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87198' 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87198 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87198 ']' 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.984 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:11.985 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.985 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 [2024-11-05 11:35:11.133221] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:19:11.985 [2024-11-05 11:35:11.133352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.244 [2024-11-05 11:35:11.313171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.244 [2024-11-05 11:35:11.418868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.504 [2024-11-05 11:35:11.614470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.504 [2024-11-05 11:35:11.614507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 [2024-11-05 11:35:11.937711] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.764 [2024-11-05 11:35:11.937762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:12.764 [2024-11-05 11:35:11.937772] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.764 [2024-11-05 11:35:11.937781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.764 "name": "Existed_Raid", 00:19:12.764 "uuid": "e585ec89-947e-4999-8607-53b45eaad5c7", 00:19:12.764 "strip_size_kb": 0, 00:19:12.764 "state": "configuring", 00:19:12.764 "raid_level": "raid1", 00:19:12.764 "superblock": true, 00:19:12.764 "num_base_bdevs": 2, 00:19:12.764 "num_base_bdevs_discovered": 0, 00:19:12.764 "num_base_bdevs_operational": 2, 00:19:12.764 "base_bdevs_list": [ 00:19:12.764 { 00:19:12.764 "name": "BaseBdev1", 00:19:12.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.764 "is_configured": false, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 0 00:19:12.764 }, 00:19:12.764 { 00:19:12.764 "name": "BaseBdev2", 00:19:12.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.764 "is_configured": false, 00:19:12.764 "data_offset": 0, 00:19:12.764 "data_size": 0 00:19:12.764 } 00:19:12.764 ] 00:19:12.764 }' 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.764 11:35:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 
[2024-11-05 11:35:12.412846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.334 [2024-11-05 11:35:12.412887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 [2024-11-05 11:35:12.424817] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.334 [2024-11-05 11:35:12.424857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.334 [2024-11-05 11:35:12.424864] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.334 [2024-11-05 11:35:12.424875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 [2024-11-05 11:35:12.471897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.334 
BaseBdev1 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 [ 00:19:13.334 { 00:19:13.334 "name": "BaseBdev1", 00:19:13.334 "aliases": [ 00:19:13.334 "f744ea31-8ed3-44a0-921d-6ed2edb7d150" 00:19:13.334 ], 00:19:13.334 "product_name": "Malloc disk", 
00:19:13.334 "block_size": 4096, 00:19:13.334 "num_blocks": 8192, 00:19:13.334 "uuid": "f744ea31-8ed3-44a0-921d-6ed2edb7d150", 00:19:13.334 "md_size": 32, 00:19:13.334 "md_interleave": false, 00:19:13.334 "dif_type": 0, 00:19:13.334 "assigned_rate_limits": { 00:19:13.334 "rw_ios_per_sec": 0, 00:19:13.334 "rw_mbytes_per_sec": 0, 00:19:13.334 "r_mbytes_per_sec": 0, 00:19:13.334 "w_mbytes_per_sec": 0 00:19:13.334 }, 00:19:13.334 "claimed": true, 00:19:13.334 "claim_type": "exclusive_write", 00:19:13.334 "zoned": false, 00:19:13.334 "supported_io_types": { 00:19:13.334 "read": true, 00:19:13.334 "write": true, 00:19:13.334 "unmap": true, 00:19:13.334 "flush": true, 00:19:13.334 "reset": true, 00:19:13.334 "nvme_admin": false, 00:19:13.334 "nvme_io": false, 00:19:13.334 "nvme_io_md": false, 00:19:13.334 "write_zeroes": true, 00:19:13.334 "zcopy": true, 00:19:13.334 "get_zone_info": false, 00:19:13.334 "zone_management": false, 00:19:13.334 "zone_append": false, 00:19:13.334 "compare": false, 00:19:13.334 "compare_and_write": false, 00:19:13.334 "abort": true, 00:19:13.334 "seek_hole": false, 00:19:13.334 "seek_data": false, 00:19:13.334 "copy": true, 00:19:13.334 "nvme_iov_md": false 00:19:13.334 }, 00:19:13.334 "memory_domains": [ 00:19:13.334 { 00:19:13.334 "dma_device_id": "system", 00:19:13.334 "dma_device_type": 1 00:19:13.334 }, 00:19:13.334 { 00:19:13.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.334 "dma_device_type": 2 00:19:13.334 } 00:19:13.334 ], 00:19:13.334 "driver_specific": {} 00:19:13.334 } 00:19:13.334 ] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:13.334 11:35:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.334 "name": "Existed_Raid", 00:19:13.334 "uuid": "91552c21-8fd1-49a9-af33-cf79bed7beb3", 
00:19:13.334 "strip_size_kb": 0, 00:19:13.334 "state": "configuring", 00:19:13.334 "raid_level": "raid1", 00:19:13.334 "superblock": true, 00:19:13.334 "num_base_bdevs": 2, 00:19:13.334 "num_base_bdevs_discovered": 1, 00:19:13.334 "num_base_bdevs_operational": 2, 00:19:13.334 "base_bdevs_list": [ 00:19:13.334 { 00:19:13.334 "name": "BaseBdev1", 00:19:13.334 "uuid": "f744ea31-8ed3-44a0-921d-6ed2edb7d150", 00:19:13.334 "is_configured": true, 00:19:13.334 "data_offset": 256, 00:19:13.334 "data_size": 7936 00:19:13.334 }, 00:19:13.334 { 00:19:13.334 "name": "BaseBdev2", 00:19:13.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.334 "is_configured": false, 00:19:13.334 "data_offset": 0, 00:19:13.334 "data_size": 0 00:19:13.334 } 00:19:13.334 ] 00:19:13.334 }' 00:19:13.334 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.335 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 [2024-11-05 11:35:12.943225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.904 [2024-11-05 11:35:12.943267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.904 11:35:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 [2024-11-05 11:35:12.951274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.904 [2024-11-05 11:35:12.953035] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.904 [2024-11-05 11:35:12.953074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 11:35:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.904 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.904 "name": "Existed_Raid", 00:19:13.904 "uuid": "c6d97632-67ea-47dd-95c7-a2c990560ae9", 00:19:13.904 "strip_size_kb": 0, 00:19:13.904 "state": "configuring", 00:19:13.904 "raid_level": "raid1", 00:19:13.904 "superblock": true, 00:19:13.904 "num_base_bdevs": 2, 00:19:13.904 "num_base_bdevs_discovered": 1, 00:19:13.904 "num_base_bdevs_operational": 2, 00:19:13.904 "base_bdevs_list": [ 00:19:13.904 { 00:19:13.904 "name": "BaseBdev1", 00:19:13.904 "uuid": "f744ea31-8ed3-44a0-921d-6ed2edb7d150", 00:19:13.904 "is_configured": true, 00:19:13.904 "data_offset": 256, 00:19:13.904 "data_size": 7936 00:19:13.904 }, 00:19:13.904 { 00:19:13.904 "name": "BaseBdev2", 00:19:13.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.904 "is_configured": false, 00:19:13.904 "data_offset": 0, 00:19:13.904 "data_size": 0 00:19:13.904 } 00:19:13.904 ] 00:19:13.904 }' 00:19:13.904 11:35:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.904 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.164 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:14.164 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.164 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.424 [2024-11-05 11:35:13.440467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.424 [2024-11-05 11:35:13.440707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:14.424 [2024-11-05 11:35:13.440722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:14.424 [2024-11-05 11:35:13.440807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:14.424 [2024-11-05 11:35:13.440931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:14.424 [2024-11-05 11:35:13.440948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:14.424 [2024-11-05 11:35:13.441046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.424 BaseBdev2 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.424 [ 00:19:14.424 { 00:19:14.424 "name": "BaseBdev2", 00:19:14.424 "aliases": [ 00:19:14.424 "074b2b6a-145e-4084-bad8-ae6bf20fe391" 00:19:14.424 ], 00:19:14.424 "product_name": "Malloc disk", 00:19:14.424 "block_size": 4096, 00:19:14.424 "num_blocks": 8192, 00:19:14.424 "uuid": "074b2b6a-145e-4084-bad8-ae6bf20fe391", 00:19:14.424 "md_size": 32, 00:19:14.424 "md_interleave": false, 00:19:14.424 "dif_type": 0, 00:19:14.424 "assigned_rate_limits": { 00:19:14.424 "rw_ios_per_sec": 0, 00:19:14.424 "rw_mbytes_per_sec": 0, 00:19:14.424 "r_mbytes_per_sec": 0, 00:19:14.424 "w_mbytes_per_sec": 0 00:19:14.424 }, 00:19:14.424 "claimed": true, 00:19:14.424 "claim_type": 
"exclusive_write", 00:19:14.424 "zoned": false, 00:19:14.424 "supported_io_types": { 00:19:14.424 "read": true, 00:19:14.424 "write": true, 00:19:14.424 "unmap": true, 00:19:14.424 "flush": true, 00:19:14.424 "reset": true, 00:19:14.424 "nvme_admin": false, 00:19:14.424 "nvme_io": false, 00:19:14.424 "nvme_io_md": false, 00:19:14.424 "write_zeroes": true, 00:19:14.424 "zcopy": true, 00:19:14.424 "get_zone_info": false, 00:19:14.424 "zone_management": false, 00:19:14.424 "zone_append": false, 00:19:14.424 "compare": false, 00:19:14.424 "compare_and_write": false, 00:19:14.424 "abort": true, 00:19:14.424 "seek_hole": false, 00:19:14.424 "seek_data": false, 00:19:14.424 "copy": true, 00:19:14.424 "nvme_iov_md": false 00:19:14.424 }, 00:19:14.424 "memory_domains": [ 00:19:14.424 { 00:19:14.424 "dma_device_id": "system", 00:19:14.424 "dma_device_type": 1 00:19:14.424 }, 00:19:14.424 { 00:19:14.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.424 "dma_device_type": 2 00:19:14.424 } 00:19:14.424 ], 00:19:14.424 "driver_specific": {} 00:19:14.424 } 00:19:14.424 ] 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:14.424 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.425 
11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.425 "name": "Existed_Raid", 00:19:14.425 "uuid": "c6d97632-67ea-47dd-95c7-a2c990560ae9", 00:19:14.425 "strip_size_kb": 0, 00:19:14.425 "state": "online", 00:19:14.425 "raid_level": "raid1", 00:19:14.425 "superblock": true, 00:19:14.425 "num_base_bdevs": 2, 00:19:14.425 "num_base_bdevs_discovered": 2, 00:19:14.425 "num_base_bdevs_operational": 2, 00:19:14.425 
"base_bdevs_list": [ 00:19:14.425 { 00:19:14.425 "name": "BaseBdev1", 00:19:14.425 "uuid": "f744ea31-8ed3-44a0-921d-6ed2edb7d150", 00:19:14.425 "is_configured": true, 00:19:14.425 "data_offset": 256, 00:19:14.425 "data_size": 7936 00:19:14.425 }, 00:19:14.425 { 00:19:14.425 "name": "BaseBdev2", 00:19:14.425 "uuid": "074b2b6a-145e-4084-bad8-ae6bf20fe391", 00:19:14.425 "is_configured": true, 00:19:14.425 "data_offset": 256, 00:19:14.425 "data_size": 7936 00:19:14.425 } 00:19:14.425 ] 00:19:14.425 }' 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.425 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.684 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:19:14.684 [2024-11-05 11:35:13.951915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.945 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.945 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.945 "name": "Existed_Raid", 00:19:14.945 "aliases": [ 00:19:14.945 "c6d97632-67ea-47dd-95c7-a2c990560ae9" 00:19:14.945 ], 00:19:14.945 "product_name": "Raid Volume", 00:19:14.945 "block_size": 4096, 00:19:14.945 "num_blocks": 7936, 00:19:14.945 "uuid": "c6d97632-67ea-47dd-95c7-a2c990560ae9", 00:19:14.945 "md_size": 32, 00:19:14.945 "md_interleave": false, 00:19:14.945 "dif_type": 0, 00:19:14.945 "assigned_rate_limits": { 00:19:14.945 "rw_ios_per_sec": 0, 00:19:14.945 "rw_mbytes_per_sec": 0, 00:19:14.945 "r_mbytes_per_sec": 0, 00:19:14.945 "w_mbytes_per_sec": 0 00:19:14.945 }, 00:19:14.945 "claimed": false, 00:19:14.945 "zoned": false, 00:19:14.945 "supported_io_types": { 00:19:14.945 "read": true, 00:19:14.945 "write": true, 00:19:14.945 "unmap": false, 00:19:14.945 "flush": false, 00:19:14.945 "reset": true, 00:19:14.945 "nvme_admin": false, 00:19:14.945 "nvme_io": false, 00:19:14.945 "nvme_io_md": false, 00:19:14.945 "write_zeroes": true, 00:19:14.945 "zcopy": false, 00:19:14.945 "get_zone_info": false, 00:19:14.945 "zone_management": false, 00:19:14.945 "zone_append": false, 00:19:14.945 "compare": false, 00:19:14.945 "compare_and_write": false, 00:19:14.945 "abort": false, 00:19:14.945 "seek_hole": false, 00:19:14.945 "seek_data": false, 00:19:14.945 "copy": false, 00:19:14.945 "nvme_iov_md": false 00:19:14.945 }, 00:19:14.945 "memory_domains": [ 00:19:14.945 { 00:19:14.945 "dma_device_id": "system", 00:19:14.945 "dma_device_type": 1 00:19:14.945 }, 00:19:14.945 { 00:19:14.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.945 "dma_device_type": 2 00:19:14.945 }, 00:19:14.945 { 
00:19:14.945 "dma_device_id": "system", 00:19:14.945 "dma_device_type": 1 00:19:14.945 }, 00:19:14.945 { 00:19:14.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.945 "dma_device_type": 2 00:19:14.945 } 00:19:14.945 ], 00:19:14.945 "driver_specific": { 00:19:14.945 "raid": { 00:19:14.945 "uuid": "c6d97632-67ea-47dd-95c7-a2c990560ae9", 00:19:14.945 "strip_size_kb": 0, 00:19:14.945 "state": "online", 00:19:14.945 "raid_level": "raid1", 00:19:14.945 "superblock": true, 00:19:14.945 "num_base_bdevs": 2, 00:19:14.945 "num_base_bdevs_discovered": 2, 00:19:14.945 "num_base_bdevs_operational": 2, 00:19:14.945 "base_bdevs_list": [ 00:19:14.945 { 00:19:14.945 "name": "BaseBdev1", 00:19:14.945 "uuid": "f744ea31-8ed3-44a0-921d-6ed2edb7d150", 00:19:14.945 "is_configured": true, 00:19:14.945 "data_offset": 256, 00:19:14.945 "data_size": 7936 00:19:14.945 }, 00:19:14.945 { 00:19:14.945 "name": "BaseBdev2", 00:19:14.945 "uuid": "074b2b6a-145e-4084-bad8-ae6bf20fe391", 00:19:14.945 "is_configured": true, 00:19:14.945 "data_offset": 256, 00:19:14.945 "data_size": 7936 00:19:14.945 } 00:19:14.945 ] 00:19:14.945 } 00:19:14.945 } 00:19:14.945 }' 00:19:14.945 11:35:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:14.945 BaseBdev2' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.945 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.945 [2024-11-05 11:35:14.175326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.205 "name": "Existed_Raid", 00:19:15.205 "uuid": "c6d97632-67ea-47dd-95c7-a2c990560ae9", 00:19:15.205 "strip_size_kb": 0, 00:19:15.205 "state": "online", 00:19:15.205 "raid_level": "raid1", 00:19:15.205 "superblock": true, 00:19:15.205 "num_base_bdevs": 2, 00:19:15.205 "num_base_bdevs_discovered": 1, 00:19:15.205 "num_base_bdevs_operational": 1, 00:19:15.205 "base_bdevs_list": [ 00:19:15.205 { 00:19:15.205 "name": null, 00:19:15.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.205 "is_configured": false, 00:19:15.205 "data_offset": 0, 00:19:15.205 "data_size": 7936 00:19:15.205 }, 00:19:15.205 { 00:19:15.205 "name": "BaseBdev2", 00:19:15.205 "uuid": 
"074b2b6a-145e-4084-bad8-ae6bf20fe391", 00:19:15.205 "is_configured": true, 00:19:15.205 "data_offset": 256, 00:19:15.205 "data_size": 7936 00:19:15.205 } 00:19:15.205 ] 00:19:15.205 }' 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.205 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.465 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.725 [2024-11-05 11:35:14.781188] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.725 [2024-11-05 11:35:14.781290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.725 [2024-11-05 11:35:14.874478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.725 [2024-11-05 11:35:14.874529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.725 [2024-11-05 11:35:14.874540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:15.725 11:35:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87198 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87198 ']' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87198 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87198 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:15.725 killing process with pid 87198 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87198' 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87198 00:19:15.725 [2024-11-05 11:35:14.963419] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.725 11:35:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87198 00:19:15.725 [2024-11-05 11:35:14.979044] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.106 11:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:17.106 00:19:17.106 real 0m4.974s 00:19:17.106 user 0m7.211s 00:19:17.106 sys 0m0.904s 00:19:17.106 11:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:17.106 
11:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.106 ************************************ 00:19:17.106 END TEST raid_state_function_test_sb_md_separate 00:19:17.106 ************************************ 00:19:17.106 11:35:16 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:17.106 11:35:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:17.106 11:35:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:17.106 11:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.106 ************************************ 00:19:17.106 START TEST raid_superblock_test_md_separate 00:19:17.106 ************************************ 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87446 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87446 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87446 ']' 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
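The `waitforlisten 87446` call above blocks until the freshly spawned `bdev_svc` process is accepting JSON-RPC connections on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern is below; the function name mirrors the shell helper, but the timeout, interval, and demo socket path are illustrative choices, not SPDK's actual values.

```python
import os
import socket
import tempfile
import time

def waitforlisten(sock_path, timeout=5.0, interval=0.05):
    """Poll until something accepts connections on a UNIX socket.

    Mirrors the shape of the autotest waitforlisten helper: retry
    connect() until it succeeds or the deadline passes. Timeout and
    interval values here are illustrative, not SPDK's.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except OSError:
            time.sleep(interval)
    return False

# Demo: stand up a scratch listener and wait for it, in place of
# the real /var/tmp/spdk.sock served by bdev_svc.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
ok = waitforlisten(path)
print(ok)
```

In the log, each subsequent `rpc_cmd` line is only issued once this wait succeeds, which is why the RPC calls that follow can assume a live socket.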
00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.106 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.106 [2024-11-05 11:35:16.172579] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:19:17.106 [2024-11-05 11:35:16.172688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87446 ] 00:19:17.106 [2024-11-05 11:35:16.346184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.366 [2024-11-05 11:35:16.453147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.366 [2024-11-05 11:35:16.641312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.366 [2024-11-05 11:35:16.641347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:17.936 11:35:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.936 11:35:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.936 malloc1 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.936 [2024-11-05 11:35:17.030817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:17.936 [2024-11-05 11:35:17.030880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.936 [2024-11-05 11:35:17.030900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.936 [2024-11-05 11:35:17.030910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.936 [2024-11-05 11:35:17.032687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.936 [2024-11-05 11:35:17.032723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:17.936 pt1 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.936 malloc2 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:17.936 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.936 11:35:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.936 [2024-11-05 11:35:17.083713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:17.936 [2024-11-05 11:35:17.083763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.937 [2024-11-05 11:35:17.083782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.937 [2024-11-05 11:35:17.083790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.937 [2024-11-05 11:35:17.085470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.937 [2024-11-05 11:35:17.085503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:17.937 pt2 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.937 [2024-11-05 11:35:17.095716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:17.937 [2024-11-05 11:35:17.097326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.937 [2024-11-05 11:35:17.097493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:17.937 [2024-11-05 11:35:17.097514] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:17.937 [2024-11-05 11:35:17.097590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:17.937 [2024-11-05 11:35:17.097705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:17.937 [2024-11-05 11:35:17.097723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:17.937 [2024-11-05 11:35:17.097816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.937 11:35:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.937 "name": "raid_bdev1", 00:19:17.937 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:17.937 "strip_size_kb": 0, 00:19:17.937 "state": "online", 00:19:17.937 "raid_level": "raid1", 00:19:17.937 "superblock": true, 00:19:17.937 "num_base_bdevs": 2, 00:19:17.937 "num_base_bdevs_discovered": 2, 00:19:17.937 "num_base_bdevs_operational": 2, 00:19:17.937 "base_bdevs_list": [ 00:19:17.937 { 00:19:17.937 "name": "pt1", 00:19:17.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.937 "is_configured": true, 00:19:17.937 "data_offset": 256, 00:19:17.937 "data_size": 7936 00:19:17.937 }, 00:19:17.937 { 00:19:17.937 "name": "pt2", 00:19:17.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.937 "is_configured": true, 00:19:17.937 "data_offset": 256, 00:19:17.937 "data_size": 7936 00:19:17.937 } 00:19:17.937 ] 00:19:17.937 }' 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.937 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.506 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:18.506 11:35:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:18.506 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.507 [2024-11-05 11:35:17.579164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.507 "name": "raid_bdev1", 00:19:18.507 "aliases": [ 00:19:18.507 "a70fbe72-3396-43aa-b64e-a79081414f16" 00:19:18.507 ], 00:19:18.507 "product_name": "Raid Volume", 00:19:18.507 "block_size": 4096, 00:19:18.507 "num_blocks": 7936, 00:19:18.507 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:18.507 "md_size": 32, 00:19:18.507 "md_interleave": false, 00:19:18.507 "dif_type": 0, 00:19:18.507 "assigned_rate_limits": { 00:19:18.507 "rw_ios_per_sec": 0, 00:19:18.507 "rw_mbytes_per_sec": 0, 00:19:18.507 "r_mbytes_per_sec": 0, 00:19:18.507 "w_mbytes_per_sec": 0 00:19:18.507 }, 00:19:18.507 "claimed": false, 00:19:18.507 "zoned": false, 
00:19:18.507 "supported_io_types": { 00:19:18.507 "read": true, 00:19:18.507 "write": true, 00:19:18.507 "unmap": false, 00:19:18.507 "flush": false, 00:19:18.507 "reset": true, 00:19:18.507 "nvme_admin": false, 00:19:18.507 "nvme_io": false, 00:19:18.507 "nvme_io_md": false, 00:19:18.507 "write_zeroes": true, 00:19:18.507 "zcopy": false, 00:19:18.507 "get_zone_info": false, 00:19:18.507 "zone_management": false, 00:19:18.507 "zone_append": false, 00:19:18.507 "compare": false, 00:19:18.507 "compare_and_write": false, 00:19:18.507 "abort": false, 00:19:18.507 "seek_hole": false, 00:19:18.507 "seek_data": false, 00:19:18.507 "copy": false, 00:19:18.507 "nvme_iov_md": false 00:19:18.507 }, 00:19:18.507 "memory_domains": [ 00:19:18.507 { 00:19:18.507 "dma_device_id": "system", 00:19:18.507 "dma_device_type": 1 00:19:18.507 }, 00:19:18.507 { 00:19:18.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.507 "dma_device_type": 2 00:19:18.507 }, 00:19:18.507 { 00:19:18.507 "dma_device_id": "system", 00:19:18.507 "dma_device_type": 1 00:19:18.507 }, 00:19:18.507 { 00:19:18.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.507 "dma_device_type": 2 00:19:18.507 } 00:19:18.507 ], 00:19:18.507 "driver_specific": { 00:19:18.507 "raid": { 00:19:18.507 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:18.507 "strip_size_kb": 0, 00:19:18.507 "state": "online", 00:19:18.507 "raid_level": "raid1", 00:19:18.507 "superblock": true, 00:19:18.507 "num_base_bdevs": 2, 00:19:18.507 "num_base_bdevs_discovered": 2, 00:19:18.507 "num_base_bdevs_operational": 2, 00:19:18.507 "base_bdevs_list": [ 00:19:18.507 { 00:19:18.507 "name": "pt1", 00:19:18.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.507 "is_configured": true, 00:19:18.507 "data_offset": 256, 00:19:18.507 "data_size": 7936 00:19:18.507 }, 00:19:18.507 { 00:19:18.507 "name": "pt2", 00:19:18.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.507 "is_configured": true, 00:19:18.507 "data_offset": 256, 
00:19:18.507 "data_size": 7936 00:19:18.507 } 00:19:18.507 ] 00:19:18.507 } 00:19:18.507 } 00:19:18.507 }' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:18.507 pt2' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.507 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:18.767 [2024-11-05 11:35:17.798720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a70fbe72-3396-43aa-b64e-a79081414f16 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a70fbe72-3396-43aa-b64e-a79081414f16 ']' 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.767 [2024-11-05 11:35:17.846413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.767 [2024-11-05 11:35:17.846437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.767 [2024-11-05 11:35:17.846504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.767 [2024-11-05 11:35:17.846548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.767 [2024-11-05 11:35:17.846558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
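The geometry the JSON dumps above report can be cross-checked with a short computation. Assumptions taken from the log itself (not from SPDK source): each base bdev is created with `bdev_malloc_create 32 4096 -m 32`, i.e. 32 MiB of 4096-byte blocks with 32 bytes of separate metadata per block, and the on-disk RAID superblock occupies the first 256 blocks, which is what `"data_offset": 256` reflects.

```python
# Sketch: reproduce the data_size reported for each base bdev.
MALLOC_SIZE_MIB = 32   # first arg of bdev_malloc_create in the log
BLOCK_SIZE = 4096      # second arg (bytes per block)
DATA_OFFSET = 256      # blocks reserved for the superblock, per the log

total_blocks = MALLOC_SIZE_MIB * 1024 * 1024 // BLOCK_SIZE
data_size = total_blocks - DATA_OFFSET

print(total_blocks)  # 8192 blocks per base malloc bdev
print(data_size)     # 7936, matching "data_size": 7936 in the dumps
```

Because the array is RAID1 over two such bdevs, the volume's `num_blocks` equals a single member's usable data size (7936), and `"md_size": 32` with `"md_interleave": false` is the separate-metadata layout this `md_separate` test exercises.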
00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.767 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:18.768 11:35:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 [2024-11-05 11:35:17.986232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:18.768 [2024-11-05 11:35:17.988015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:18.768 [2024-11-05 11:35:17.988097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:18.768 [2024-11-05 11:35:17.988153] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:18.768 [2024-11-05 11:35:17.988170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.768 [2024-11-05 11:35:17.988179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:18.768 request: 00:19:18.768 { 00:19:18.768 "name": 
"raid_bdev1", 00:19:18.768 "raid_level": "raid1", 00:19:18.768 "base_bdevs": [ 00:19:18.768 "malloc1", 00:19:18.768 "malloc2" 00:19:18.768 ], 00:19:18.768 "superblock": false, 00:19:18.768 "method": "bdev_raid_create", 00:19:18.768 "req_id": 1 00:19:18.768 } 00:19:18.768 Got JSON-RPC error response 00:19:18.768 response: 00:19:18.768 { 00:19:18.768 "code": -17, 00:19:18.768 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:18.768 } 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:18.768 11:35:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 [2024-11-05 11:35:18.034223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.768 [2024-11-05 11:35:18.034269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.768 [2024-11-05 11:35:18.034281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.768 [2024-11-05 11:35:18.034292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.768 [2024-11-05 11:35:18.036006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.768 [2024-11-05 11:35:18.036045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.768 [2024-11-05 11:35:18.036080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:18.768 [2024-11-05 11:35:18.036124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.768 pt1 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.768 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.028 "name": "raid_bdev1", 00:19:19.028 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:19.028 "strip_size_kb": 0, 00:19:19.028 "state": "configuring", 00:19:19.028 "raid_level": "raid1", 00:19:19.028 "superblock": true, 00:19:19.028 "num_base_bdevs": 2, 00:19:19.028 "num_base_bdevs_discovered": 1, 00:19:19.028 "num_base_bdevs_operational": 2, 00:19:19.028 "base_bdevs_list": [ 00:19:19.028 { 00:19:19.028 "name": "pt1", 00:19:19.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.028 "is_configured": true, 00:19:19.028 "data_offset": 256, 00:19:19.028 "data_size": 7936 00:19:19.028 }, 00:19:19.028 { 00:19:19.028 "name": null, 00:19:19.028 
"uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.028 "is_configured": false, 00:19:19.028 "data_offset": 256, 00:19:19.028 "data_size": 7936 00:19:19.028 } 00:19:19.028 ] 00:19:19.028 }' 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.028 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 [2024-11-05 11:35:18.417543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.288 [2024-11-05 11:35:18.417597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.288 [2024-11-05 11:35:18.417614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:19.288 [2024-11-05 11:35:18.417623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.288 [2024-11-05 11:35:18.417764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.288 [2024-11-05 11:35:18.417787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.288 [2024-11-05 11:35:18.417821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:19:19.288 [2024-11-05 11:35:18.417838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.288 [2024-11-05 11:35:18.417937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:19.288 [2024-11-05 11:35:18.417953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:19.288 [2024-11-05 11:35:18.418009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:19.288 [2024-11-05 11:35:18.418115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:19.288 [2024-11-05 11:35:18.418139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:19.288 [2024-11-05 11:35:18.418215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.288 pt2 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.288 "name": "raid_bdev1", 00:19:19.288 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:19.288 "strip_size_kb": 0, 00:19:19.288 "state": "online", 00:19:19.288 "raid_level": "raid1", 00:19:19.288 "superblock": true, 00:19:19.288 "num_base_bdevs": 2, 00:19:19.288 "num_base_bdevs_discovered": 2, 00:19:19.288 "num_base_bdevs_operational": 2, 00:19:19.288 "base_bdevs_list": [ 00:19:19.288 { 00:19:19.288 "name": "pt1", 00:19:19.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.288 "is_configured": true, 00:19:19.288 "data_offset": 256, 00:19:19.288 "data_size": 7936 00:19:19.288 }, 00:19:19.288 { 00:19:19.288 "name": "pt2", 00:19:19.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.288 "is_configured": true, 00:19:19.288 "data_offset": 256, 
00:19:19.288 "data_size": 7936 00:19:19.288 } 00:19:19.288 ] 00:19:19.288 }' 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.288 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.862 [2024-11-05 11:35:18.872980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.862 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.862 "name": "raid_bdev1", 00:19:19.862 "aliases": [ 00:19:19.862 "a70fbe72-3396-43aa-b64e-a79081414f16" 00:19:19.862 ], 00:19:19.862 "product_name": 
"Raid Volume", 00:19:19.862 "block_size": 4096, 00:19:19.862 "num_blocks": 7936, 00:19:19.862 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:19.862 "md_size": 32, 00:19:19.862 "md_interleave": false, 00:19:19.862 "dif_type": 0, 00:19:19.862 "assigned_rate_limits": { 00:19:19.862 "rw_ios_per_sec": 0, 00:19:19.862 "rw_mbytes_per_sec": 0, 00:19:19.862 "r_mbytes_per_sec": 0, 00:19:19.862 "w_mbytes_per_sec": 0 00:19:19.862 }, 00:19:19.862 "claimed": false, 00:19:19.862 "zoned": false, 00:19:19.862 "supported_io_types": { 00:19:19.862 "read": true, 00:19:19.862 "write": true, 00:19:19.862 "unmap": false, 00:19:19.862 "flush": false, 00:19:19.862 "reset": true, 00:19:19.863 "nvme_admin": false, 00:19:19.863 "nvme_io": false, 00:19:19.863 "nvme_io_md": false, 00:19:19.863 "write_zeroes": true, 00:19:19.863 "zcopy": false, 00:19:19.863 "get_zone_info": false, 00:19:19.863 "zone_management": false, 00:19:19.863 "zone_append": false, 00:19:19.863 "compare": false, 00:19:19.863 "compare_and_write": false, 00:19:19.863 "abort": false, 00:19:19.863 "seek_hole": false, 00:19:19.863 "seek_data": false, 00:19:19.863 "copy": false, 00:19:19.863 "nvme_iov_md": false 00:19:19.863 }, 00:19:19.863 "memory_domains": [ 00:19:19.863 { 00:19:19.863 "dma_device_id": "system", 00:19:19.863 "dma_device_type": 1 00:19:19.863 }, 00:19:19.863 { 00:19:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.863 "dma_device_type": 2 00:19:19.863 }, 00:19:19.863 { 00:19:19.863 "dma_device_id": "system", 00:19:19.863 "dma_device_type": 1 00:19:19.863 }, 00:19:19.863 { 00:19:19.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.863 "dma_device_type": 2 00:19:19.863 } 00:19:19.863 ], 00:19:19.863 "driver_specific": { 00:19:19.863 "raid": { 00:19:19.863 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:19.863 "strip_size_kb": 0, 00:19:19.863 "state": "online", 00:19:19.863 "raid_level": "raid1", 00:19:19.863 "superblock": true, 00:19:19.863 "num_base_bdevs": 2, 00:19:19.863 
"num_base_bdevs_discovered": 2, 00:19:19.863 "num_base_bdevs_operational": 2, 00:19:19.863 "base_bdevs_list": [ 00:19:19.863 { 00:19:19.863 "name": "pt1", 00:19:19.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.863 "is_configured": true, 00:19:19.863 "data_offset": 256, 00:19:19.863 "data_size": 7936 00:19:19.863 }, 00:19:19.863 { 00:19:19.863 "name": "pt2", 00:19:19.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.863 "is_configured": true, 00:19:19.863 "data_offset": 256, 00:19:19.863 "data_size": 7936 00:19:19.863 } 00:19:19.863 ] 00:19:19.863 } 00:19:19.863 } 00:19:19.863 }' 00:19:19.863 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.863 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:19.863 pt2' 00:19:19.863 11:35:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.864 
11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.864 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.864 [2024-11-05 11:35:19.116559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a70fbe72-3396-43aa-b64e-a79081414f16 '!=' a70fbe72-3396-43aa-b64e-a79081414f16 ']' 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.127 [2024-11-05 11:35:19.148314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.127 11:35:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.127 "name": "raid_bdev1", 00:19:20.127 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:20.127 "strip_size_kb": 0, 00:19:20.127 "state": "online", 00:19:20.127 "raid_level": "raid1", 00:19:20.127 "superblock": true, 00:19:20.127 "num_base_bdevs": 2, 00:19:20.127 "num_base_bdevs_discovered": 1, 00:19:20.127 "num_base_bdevs_operational": 1, 00:19:20.127 "base_bdevs_list": [ 00:19:20.127 { 00:19:20.127 "name": null, 00:19:20.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.127 "is_configured": false, 00:19:20.127 "data_offset": 0, 00:19:20.127 "data_size": 7936 00:19:20.127 }, 00:19:20.127 { 00:19:20.127 "name": "pt2", 00:19:20.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.127 "is_configured": true, 00:19:20.127 "data_offset": 256, 00:19:20.127 "data_size": 7936 00:19:20.127 } 00:19:20.127 ] 00:19:20.127 }' 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:20.127 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.393 [2024-11-05 11:35:19.583604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.393 [2024-11-05 11:35:19.583630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.393 [2024-11-05 11:35:19.583676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.393 [2024-11-05 11:35:19.583710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.393 [2024-11-05 11:35:19.583721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:20.393 11:35:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.393 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.393 [2024-11-05 11:35:19.659482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.393 [2024-11-05 11:35:19.659535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.393 
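The `(( i = 1 )); (( i < num_base_bdevs ))` loop above re-creates the passthru base bdevs after the raid bdev was deleted. A hedged sketch of that loop; `rpc_cmd` is replaced by an echo stub so the sketch runs without an SPDK target, and the malloc/pt names follow the naming convention visible in the log:

```shell
num_base_bdevs=2
# Stand-in for the real rpc_cmd helper (which invokes SPDK's rpc.py)
rpc_cmd() { echo "rpc: $*"; }

for (( i = 1; i < num_base_bdevs; i++ )); do
    # UUIDs in the log are zero-padded sequence numbers
    uuid=$(printf '00000000-0000-0000-0000-%012d' "$((i + 1))")
    rpc_cmd bdev_passthru_create -b "malloc$((i + 1))" -p "pt$((i + 1))" -u "$uuid"
done
# prints: rpc: bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
```

Because pt2's backing malloc bdev still carries a raid superblock, the examine path (`raid_bdev_examine_cont` in the log) immediately re-claims it for `raid_bdev1` once the passthru bdev registers.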
[2024-11-05 11:35:19.659551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:20.393 [2024-11-05 11:35:19.659561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.393 [2024-11-05 11:35:19.661470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.393 [2024-11-05 11:35:19.661508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.393 [2024-11-05 11:35:19.661547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.393 [2024-11-05 11:35:19.661602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.393 [2024-11-05 11:35:19.661685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:20.393 [2024-11-05 11:35:19.661698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:20.393 [2024-11-05 11:35:19.661763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:20.393 [2024-11-05 11:35:19.661866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:20.393 [2024-11-05 11:35:19.661882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:20.393 [2024-11-05 11:35:19.661988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.683 pt2 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.683 "name": "raid_bdev1", 00:19:20.683 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:20.683 "strip_size_kb": 0, 00:19:20.683 "state": "online", 00:19:20.683 "raid_level": "raid1", 00:19:20.683 "superblock": true, 00:19:20.683 "num_base_bdevs": 2, 00:19:20.683 "num_base_bdevs_discovered": 1, 00:19:20.683 "num_base_bdevs_operational": 1, 00:19:20.683 "base_bdevs_list": [ 00:19:20.683 { 00:19:20.683 
"name": null, 00:19:20.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.683 "is_configured": false, 00:19:20.683 "data_offset": 256, 00:19:20.683 "data_size": 7936 00:19:20.683 }, 00:19:20.683 { 00:19:20.683 "name": "pt2", 00:19:20.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.683 "is_configured": true, 00:19:20.683 "data_offset": 256, 00:19:20.683 "data_size": 7936 00:19:20.683 } 00:19:20.683 ] 00:19:20.683 }' 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.683 11:35:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 [2024-11-05 11:35:20.094737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.943 [2024-11-05 11:35:20.094763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.943 [2024-11-05 11:35:20.094808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.943 [2024-11-05 11:35:20.094845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.943 [2024-11-05 11:35:20.094853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.943 11:35:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 [2024-11-05 11:35:20.158661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.943 [2024-11-05 11:35:20.158778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.943 [2024-11-05 11:35:20.158817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:20.943 [2024-11-05 11:35:20.158863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.943 [2024-11-05 11:35:20.160745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.943 [2024-11-05 11:35:20.160783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.943 [2024-11-05 11:35:20.160826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:19:20.943 [2024-11-05 11:35:20.160871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.943 [2024-11-05 11:35:20.160989] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:20.943 [2024-11-05 11:35:20.160999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.943 [2024-11-05 11:35:20.161013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:20.943 [2024-11-05 11:35:20.161073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.943 [2024-11-05 11:35:20.161150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:20.943 [2024-11-05 11:35:20.161159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:20.943 [2024-11-05 11:35:20.161229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:20.943 [2024-11-05 11:35:20.161330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:20.943 [2024-11-05 11:35:20.161347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:20.943 [2024-11-05 11:35:20.161444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.943 pt1 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.943 "name": "raid_bdev1", 00:19:20.943 "uuid": "a70fbe72-3396-43aa-b64e-a79081414f16", 00:19:20.943 "strip_size_kb": 0, 00:19:20.943 "state": "online", 00:19:20.943 "raid_level": "raid1", 00:19:20.943 "superblock": true, 00:19:20.943 "num_base_bdevs": 2, 00:19:20.943 "num_base_bdevs_discovered": 1, 00:19:20.943 
"num_base_bdevs_operational": 1, 00:19:20.943 "base_bdevs_list": [ 00:19:20.943 { 00:19:20.943 "name": null, 00:19:20.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.943 "is_configured": false, 00:19:20.943 "data_offset": 256, 00:19:20.943 "data_size": 7936 00:19:20.943 }, 00:19:20.943 { 00:19:20.943 "name": "pt2", 00:19:20.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.943 "is_configured": true, 00:19:20.943 "data_offset": 256, 00:19:20.943 "data_size": 7936 00:19:20.943 } 00:19:20.943 ] 00:19:20.943 }' 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.943 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.523 [2024-11-05 
11:35:20.682036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a70fbe72-3396-43aa-b64e-a79081414f16 '!=' a70fbe72-3396-43aa-b64e-a79081414f16 ']' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87446 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87446 ']' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87446 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87446 00:19:21.523 killing process with pid 87446 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87446' 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87446 00:19:21.523 [2024-11-05 11:35:20.753920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.523 [2024-11-05 11:35:20.753976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.523 [2024-11-05 11:35:20.754007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:19:21.523 [2024-11-05 11:35:20.754020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:21.523 11:35:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87446 00:19:21.783 [2024-11-05 11:35:20.962843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.165 11:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:23.165 00:19:23.165 real 0m5.920s 00:19:23.165 user 0m8.946s 00:19:23.165 sys 0m1.120s 00:19:23.165 ************************************ 00:19:23.165 END TEST raid_superblock_test_md_separate 00:19:23.165 ************************************ 00:19:23.165 11:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.165 11:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.165 11:35:22 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:23.165 11:35:22 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:23.165 11:35:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:23.165 11:35:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.165 11:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.165 ************************************ 00:19:23.165 START TEST raid_rebuild_test_sb_md_separate 00:19:23.165 ************************************ 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:23.165 
11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87773 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87773 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87773 ']' 00:19:23.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.165 11:35:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.165 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:23.165 Zero copy mechanism will not be used. 00:19:23.165 [2024-11-05 11:35:22.189053] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:19:23.165 [2024-11-05 11:35:22.189177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87773 ] 00:19:23.165 [2024-11-05 11:35:22.363588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.425 [2024-11-05 11:35:22.471029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.425 [2024-11-05 11:35:22.651232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.425 [2024-11-05 11:35:22.651286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.995 BaseBdev1_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.995 11:35:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.995 [2024-11-05 11:35:23.056296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.995 [2024-11-05 11:35:23.056440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.995 [2024-11-05 11:35:23.056476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:23.995 [2024-11-05 11:35:23.056506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.995 [2024-11-05 11:35:23.058285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.995 [2024-11-05 11:35:23.058351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.995 BaseBdev1 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.995 BaseBdev2_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.995 [2024-11-05 11:35:23.105909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:23.995 [2024-11-05 11:35:23.106017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.995 [2024-11-05 11:35:23.106051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:23.995 [2024-11-05 11:35:23.106079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.995 [2024-11-05 11:35:23.107826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.995 [2024-11-05 11:35:23.107894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.995 BaseBdev2 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.995 spare_malloc 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.995 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.996 spare_delay 00:19:23.996 11:35:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.996 [2024-11-05 11:35:23.206913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.996 [2024-11-05 11:35:23.207024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.996 [2024-11-05 11:35:23.207061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:23.996 [2024-11-05 11:35:23.207090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.996 [2024-11-05 11:35:23.208847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.996 [2024-11-05 11:35:23.208917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.996 spare 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.996 [2024-11-05 11:35:23.218915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.996 [2024-11-05 11:35:23.220582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:19:23.996 [2024-11-05 11:35:23.220784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:23.996 [2024-11-05 11:35:23.220802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:23.996 [2024-11-05 11:35:23.220868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:23.996 [2024-11-05 11:35:23.220986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:23.996 [2024-11-05 11:35:23.220993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:23.996 [2024-11-05 11:35:23.221098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.996 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.255 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.255 "name": "raid_bdev1", 00:19:24.255 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:24.255 "strip_size_kb": 0, 00:19:24.255 "state": "online", 00:19:24.255 "raid_level": "raid1", 00:19:24.255 "superblock": true, 00:19:24.255 "num_base_bdevs": 2, 00:19:24.255 "num_base_bdevs_discovered": 2, 00:19:24.255 "num_base_bdevs_operational": 2, 00:19:24.255 "base_bdevs_list": [ 00:19:24.255 { 00:19:24.255 "name": "BaseBdev1", 00:19:24.255 "uuid": "8a731a20-facb-576d-8474-b5e25ba1d437", 00:19:24.255 "is_configured": true, 00:19:24.255 "data_offset": 256, 00:19:24.255 "data_size": 7936 00:19:24.255 }, 00:19:24.255 { 00:19:24.255 "name": "BaseBdev2", 00:19:24.255 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:24.255 "is_configured": true, 00:19:24.255 "data_offset": 256, 00:19:24.255 "data_size": 7936 00:19:24.255 } 00:19:24.255 ] 00:19:24.255 }' 00:19:24.255 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.255 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.515 11:35:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:24.515 [2024-11-05 11:35:23.702322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:24.515 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:24.775 11:35:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:24.775 [2024-11-05 11:35:23.973638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:24.775 /dev/nbd0 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:24.775 
11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.775 1+0 records in 00:19:24.775 1+0 records out 00:19:24.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003507 s, 11.7 MB/s 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:24.775 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:25.035 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:25.604 7936+0 records in 00:19:25.604 7936+0 records out 00:19:25.604 32505856 bytes (33 MB, 31 MiB) copied, 0.633966 s, 51.3 MB/s 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.604 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.864 [2024-11-05 11:35:24.897220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.864 [2024-11-05 11:35:24.909832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.864 "name": "raid_bdev1", 00:19:25.864 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:25.864 "strip_size_kb": 0, 00:19:25.864 "state": "online", 00:19:25.864 "raid_level": "raid1", 00:19:25.864 "superblock": true, 00:19:25.864 "num_base_bdevs": 2, 00:19:25.864 "num_base_bdevs_discovered": 1, 00:19:25.864 "num_base_bdevs_operational": 1, 00:19:25.864 "base_bdevs_list": [ 00:19:25.864 { 00:19:25.864 "name": null, 00:19:25.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.864 "is_configured": false, 00:19:25.864 "data_offset": 0, 00:19:25.864 "data_size": 7936 00:19:25.864 }, 00:19:25.864 { 00:19:25.864 "name": "BaseBdev2", 00:19:25.864 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:25.864 "is_configured": true, 00:19:25.864 "data_offset": 256, 00:19:25.864 "data_size": 7936 00:19:25.864 } 00:19:25.864 ] 00:19:25.864 }' 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.864 11:35:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.124 11:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.124 11:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
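The entries above show `verify_raid_bdev_state` fetching `rpc_cmd bdev_raid_get_bdevs all`, filtering with `jq -r '.[] | select(.name == "raid_bdev1")'`, and checking fields of the resulting JSON. A minimal self-contained sketch of that field check follows; it substitutes `sed` over a trimmed copy of the JSON dump above so it runs without a live SPDK target or jq, and the helper names `get_str`/`get_num` are illustrative, not part of bdev_raid.sh.

```shell
# Trimmed sample of the raid_bdev_info JSON captured in the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'

# Extract a string-valued field (one field per line in the pretty-printed dump).
get_str() {
    printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

# Extract a numeric field.
get_num() {
    printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \([0-9]*\).*/\1/p"
}

state=$(get_str state)
level=$(get_str raid_level)
discovered=$(get_num num_base_bdevs_discovered)

# Mirror the assertion made after BaseBdev1 is removed: the array stays
# online at raid1 with a single discovered base bdev.
echo "state=$state level=$level discovered=$discovered"
```

The real script makes the same comparison against `expected_state`, `raid_level`, and `num_base_bdevs_operational` locals set at the top of `verify_raid_bdev_state`.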
00:19:26.124 11:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.124 [2024-11-05 11:35:25.349050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.124 [2024-11-05 11:35:25.363674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:26.124 11:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.124 11:35:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:26.124 [2024-11-05 11:35:25.365347] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.505 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.505 11:35:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.505 "name": "raid_bdev1", 00:19:27.505 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:27.505 "strip_size_kb": 0, 00:19:27.505 "state": "online", 00:19:27.505 "raid_level": "raid1", 00:19:27.505 "superblock": true, 00:19:27.505 "num_base_bdevs": 2, 00:19:27.505 "num_base_bdevs_discovered": 2, 00:19:27.505 "num_base_bdevs_operational": 2, 00:19:27.505 "process": { 00:19:27.505 "type": "rebuild", 00:19:27.505 "target": "spare", 00:19:27.505 "progress": { 00:19:27.505 "blocks": 2560, 00:19:27.505 "percent": 32 00:19:27.505 } 00:19:27.505 }, 00:19:27.505 "base_bdevs_list": [ 00:19:27.505 { 00:19:27.505 "name": "spare", 00:19:27.505 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:27.505 "is_configured": true, 00:19:27.506 "data_offset": 256, 00:19:27.506 "data_size": 7936 00:19:27.506 }, 00:19:27.506 { 00:19:27.506 "name": "BaseBdev2", 00:19:27.506 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:27.506 "is_configured": true, 00:19:27.506 "data_offset": 256, 00:19:27.506 "data_size": 7936 00:19:27.506 } 00:19:27.506 ] 00:19:27.506 }' 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.506 [2024-11-05 11:35:26.525483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.506 [2024-11-05 11:35:26.569932] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.506 [2024-11-05 11:35:26.570027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.506 [2024-11-05 11:35:26.570042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.506 [2024-11-05 11:35:26.570051] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.506 "name": "raid_bdev1", 00:19:27.506 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:27.506 "strip_size_kb": 0, 00:19:27.506 "state": "online", 00:19:27.506 "raid_level": "raid1", 00:19:27.506 "superblock": true, 00:19:27.506 "num_base_bdevs": 2, 00:19:27.506 "num_base_bdevs_discovered": 1, 00:19:27.506 "num_base_bdevs_operational": 1, 00:19:27.506 "base_bdevs_list": [ 00:19:27.506 { 00:19:27.506 "name": null, 00:19:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.506 "is_configured": false, 00:19:27.506 "data_offset": 0, 00:19:27.506 "data_size": 7936 00:19:27.506 }, 00:19:27.506 { 00:19:27.506 "name": "BaseBdev2", 00:19:27.506 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:27.506 "is_configured": true, 00:19:27.506 "data_offset": 256, 00:19:27.506 "data_size": 7936 00:19:27.506 } 00:19:27.506 ] 00:19:27.506 }' 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.506 11:35:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.765 11:35:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.765 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.024 "name": "raid_bdev1", 00:19:28.024 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:28.024 "strip_size_kb": 0, 00:19:28.024 "state": "online", 00:19:28.024 "raid_level": "raid1", 00:19:28.024 "superblock": true, 00:19:28.024 "num_base_bdevs": 2, 00:19:28.024 "num_base_bdevs_discovered": 1, 00:19:28.024 "num_base_bdevs_operational": 1, 00:19:28.024 "base_bdevs_list": [ 00:19:28.024 { 00:19:28.024 "name": null, 00:19:28.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.024 "is_configured": false, 00:19:28.024 "data_offset": 0, 00:19:28.024 "data_size": 7936 00:19:28.024 }, 00:19:28.024 { 00:19:28.024 "name": "BaseBdev2", 00:19:28.024 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:28.024 "is_configured": true, 00:19:28.024 "data_offset": 256, 00:19:28.024 "data_size": 7936 
00:19:28.024 } 00:19:28.024 ] 00:19:28.024 }' 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.024 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.025 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.025 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.025 [2024-11-05 11:35:27.164110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.025 [2024-11-05 11:35:27.176985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:28.025 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.025 11:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:28.025 [2024-11-05 11:35:27.178700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.964 "name": "raid_bdev1", 00:19:28.964 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:28.964 "strip_size_kb": 0, 00:19:28.964 "state": "online", 00:19:28.964 "raid_level": "raid1", 00:19:28.964 "superblock": true, 00:19:28.964 "num_base_bdevs": 2, 00:19:28.964 "num_base_bdevs_discovered": 2, 00:19:28.964 "num_base_bdevs_operational": 2, 00:19:28.964 "process": { 00:19:28.964 "type": "rebuild", 00:19:28.964 "target": "spare", 00:19:28.964 "progress": { 00:19:28.964 "blocks": 2560, 00:19:28.964 "percent": 32 00:19:28.964 } 00:19:28.964 }, 00:19:28.964 "base_bdevs_list": [ 00:19:28.964 { 00:19:28.964 "name": "spare", 00:19:28.964 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:28.964 "is_configured": true, 00:19:28.964 "data_offset": 256, 00:19:28.964 "data_size": 7936 00:19:28.964 }, 00:19:28.964 { 00:19:28.964 "name": "BaseBdev2", 00:19:28.964 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:28.964 "is_configured": true, 00:19:28.964 "data_offset": 256, 00:19:28.964 "data_size": 7936 00:19:28.964 } 00:19:28.964 ] 00:19:28.964 }' 00:19:28.964 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:29.224 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=698 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.224 
11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.224 "name": "raid_bdev1", 00:19:29.224 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:29.224 "strip_size_kb": 0, 00:19:29.224 "state": "online", 00:19:29.224 "raid_level": "raid1", 00:19:29.224 "superblock": true, 00:19:29.224 "num_base_bdevs": 2, 00:19:29.224 "num_base_bdevs_discovered": 2, 00:19:29.224 "num_base_bdevs_operational": 2, 00:19:29.224 "process": { 00:19:29.224 "type": "rebuild", 00:19:29.224 "target": "spare", 00:19:29.224 "progress": { 00:19:29.224 "blocks": 2816, 00:19:29.224 "percent": 35 00:19:29.224 } 00:19:29.224 }, 00:19:29.224 "base_bdevs_list": [ 00:19:29.224 { 00:19:29.224 "name": "spare", 00:19:29.224 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:29.224 "is_configured": true, 00:19:29.224 "data_offset": 256, 00:19:29.224 "data_size": 7936 00:19:29.224 }, 00:19:29.224 { 00:19:29.224 "name": "BaseBdev2", 00:19:29.224 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:29.224 "is_configured": true, 00:19:29.224 "data_offset": 256, 00:19:29.224 "data_size": 7936 00:19:29.224 } 00:19:29.224 ] 00:19:29.224 }' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.224 11:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.610 "name": "raid_bdev1", 00:19:30.610 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:30.610 "strip_size_kb": 0, 00:19:30.610 
"state": "online", 00:19:30.610 "raid_level": "raid1", 00:19:30.610 "superblock": true, 00:19:30.610 "num_base_bdevs": 2, 00:19:30.610 "num_base_bdevs_discovered": 2, 00:19:30.610 "num_base_bdevs_operational": 2, 00:19:30.610 "process": { 00:19:30.610 "type": "rebuild", 00:19:30.610 "target": "spare", 00:19:30.610 "progress": { 00:19:30.610 "blocks": 5632, 00:19:30.610 "percent": 70 00:19:30.610 } 00:19:30.610 }, 00:19:30.610 "base_bdevs_list": [ 00:19:30.610 { 00:19:30.610 "name": "spare", 00:19:30.610 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:30.610 "is_configured": true, 00:19:30.610 "data_offset": 256, 00:19:30.610 "data_size": 7936 00:19:30.610 }, 00:19:30.610 { 00:19:30.610 "name": "BaseBdev2", 00:19:30.610 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:30.610 "is_configured": true, 00:19:30.610 "data_offset": 256, 00:19:30.610 "data_size": 7936 00:19:30.610 } 00:19:30.610 ] 00:19:30.610 }' 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.610 11:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:31.180 [2024-11-05 11:35:30.290227] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:31.180 [2024-11-05 11:35:30.290374] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:31.180 [2024-11-05 11:35:30.290468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.438 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.438 "name": "raid_bdev1", 00:19:31.438 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:31.438 "strip_size_kb": 0, 00:19:31.438 "state": "online", 00:19:31.438 "raid_level": "raid1", 00:19:31.438 "superblock": true, 00:19:31.438 "num_base_bdevs": 2, 00:19:31.438 "num_base_bdevs_discovered": 2, 00:19:31.438 "num_base_bdevs_operational": 2, 00:19:31.438 "base_bdevs_list": [ 00:19:31.438 { 00:19:31.438 "name": "spare", 00:19:31.438 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:31.438 "is_configured": true, 00:19:31.438 "data_offset": 256, 00:19:31.438 "data_size": 7936 
00:19:31.438 }, 00:19:31.438 { 00:19:31.438 "name": "BaseBdev2", 00:19:31.438 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:31.438 "is_configured": true, 00:19:31.438 "data_offset": 256, 00:19:31.438 "data_size": 7936 00:19:31.438 } 00:19:31.438 ] 00:19:31.439 }' 00:19:31.439 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.439 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:31.439 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.698 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:31.698 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:31.698 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.699 
11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.699 "name": "raid_bdev1", 00:19:31.699 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:31.699 "strip_size_kb": 0, 00:19:31.699 "state": "online", 00:19:31.699 "raid_level": "raid1", 00:19:31.699 "superblock": true, 00:19:31.699 "num_base_bdevs": 2, 00:19:31.699 "num_base_bdevs_discovered": 2, 00:19:31.699 "num_base_bdevs_operational": 2, 00:19:31.699 "base_bdevs_list": [ 00:19:31.699 { 00:19:31.699 "name": "spare", 00:19:31.699 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:31.699 "is_configured": true, 00:19:31.699 "data_offset": 256, 00:19:31.699 "data_size": 7936 00:19:31.699 }, 00:19:31.699 { 00:19:31.699 "name": "BaseBdev2", 00:19:31.699 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:31.699 "is_configured": true, 00:19:31.699 "data_offset": 256, 00:19:31.699 "data_size": 7936 00:19:31.699 } 00:19:31.699 ] 00:19:31.699 }' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.699 11:35:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.699 "name": "raid_bdev1", 00:19:31.699 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:31.699 "strip_size_kb": 0, 00:19:31.699 "state": "online", 00:19:31.699 "raid_level": "raid1", 00:19:31.699 "superblock": true, 00:19:31.699 "num_base_bdevs": 2, 00:19:31.699 "num_base_bdevs_discovered": 2, 00:19:31.699 "num_base_bdevs_operational": 2, 00:19:31.699 "base_bdevs_list": [ 00:19:31.699 { 00:19:31.699 "name": "spare", 00:19:31.699 "uuid": 
"eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:31.699 "is_configured": true, 00:19:31.699 "data_offset": 256, 00:19:31.699 "data_size": 7936 00:19:31.699 }, 00:19:31.699 { 00:19:31.699 "name": "BaseBdev2", 00:19:31.699 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:31.699 "is_configured": true, 00:19:31.699 "data_offset": 256, 00:19:31.699 "data_size": 7936 00:19:31.699 } 00:19:31.699 ] 00:19:31.699 }' 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.699 11:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.959 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:31.959 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 [2024-11-05 11:35:31.239942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.219 [2024-11-05 11:35:31.240021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.219 [2024-11-05 11:35:31.240150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.219 [2024-11-05 11:35:31.240231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.219 [2024-11-05 11:35:31.240284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:32.219 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:32.479 
/dev/nbd0 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:32.479 1+0 records in 00:19:32.479 1+0 records out 00:19:32.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299003 s, 13.7 MB/s 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.479 11:35:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:32.479 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:32.480 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:32.480 /dev/nbd1 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:32.739 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:19:32.740 1+0 records in 00:19:32.740 1+0 records out 00:19:32.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459043 s, 8.9 MB/s 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.740 11:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.999 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:33.259 
11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.259 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.259 [2024-11-05 11:35:32.416922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:33.259 [2024-11-05 11:35:32.417021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.259 [2024-11-05 11:35:32.417058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:33.259 [2024-11-05 11:35:32.417084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.259 [2024-11-05 11:35:32.418870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.259 [2024-11-05 11:35:32.418939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:33.259 [2024-11-05 11:35:32.419009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:19:33.260 [2024-11-05 11:35:32.419084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.260 [2024-11-05 11:35:32.419343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.260 spare 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.260 [2024-11-05 11:35:32.519255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:33.260 [2024-11-05 11:35:32.519324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.260 [2024-11-05 11:35:32.519425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:33.260 [2024-11-05 11:35:32.519561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:33.260 [2024-11-05 11:35:32.519597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:33.260 [2024-11-05 11:35:32.519726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.260 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.519 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.519 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.519 "name": "raid_bdev1", 00:19:33.519 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:33.519 "strip_size_kb": 0, 00:19:33.519 "state": "online", 00:19:33.519 "raid_level": "raid1", 00:19:33.519 "superblock": true, 00:19:33.519 "num_base_bdevs": 2, 00:19:33.519 "num_base_bdevs_discovered": 2, 00:19:33.519 "num_base_bdevs_operational": 2, 00:19:33.519 "base_bdevs_list": [ 
00:19:33.519 { 00:19:33.519 "name": "spare", 00:19:33.519 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:33.519 "is_configured": true, 00:19:33.519 "data_offset": 256, 00:19:33.519 "data_size": 7936 00:19:33.519 }, 00:19:33.519 { 00:19:33.519 "name": "BaseBdev2", 00:19:33.519 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:33.519 "is_configured": true, 00:19:33.519 "data_offset": 256, 00:19:33.519 "data_size": 7936 00:19:33.519 } 00:19:33.519 ] 00:19:33.519 }' 00:19:33.519 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.520 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.779 11:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.779 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.779 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.779 "name": "raid_bdev1", 00:19:33.780 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:33.780 "strip_size_kb": 0, 00:19:33.780 "state": "online", 00:19:33.780 "raid_level": "raid1", 00:19:33.780 "superblock": true, 00:19:33.780 "num_base_bdevs": 2, 00:19:33.780 "num_base_bdevs_discovered": 2, 00:19:33.780 "num_base_bdevs_operational": 2, 00:19:33.780 "base_bdevs_list": [ 00:19:33.780 { 00:19:33.780 "name": "spare", 00:19:33.780 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:33.780 "is_configured": true, 00:19:33.780 "data_offset": 256, 00:19:33.780 "data_size": 7936 00:19:33.780 }, 00:19:33.780 { 00:19:33.780 "name": "BaseBdev2", 00:19:33.780 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:33.780 "is_configured": true, 00:19:33.780 "data_offset": 256, 00:19:33.780 "data_size": 7936 00:19:33.780 } 00:19:33.780 ] 00:19:33.780 }' 00:19:33.780 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.039 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.040 [2024-11-05 11:35:33.151684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.040 "name": "raid_bdev1", 00:19:34.040 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:34.040 "strip_size_kb": 0, 00:19:34.040 "state": "online", 00:19:34.040 "raid_level": "raid1", 00:19:34.040 "superblock": true, 00:19:34.040 "num_base_bdevs": 2, 00:19:34.040 "num_base_bdevs_discovered": 1, 00:19:34.040 "num_base_bdevs_operational": 1, 00:19:34.040 "base_bdevs_list": [ 00:19:34.040 { 00:19:34.040 "name": null, 00:19:34.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.040 "is_configured": false, 00:19:34.040 "data_offset": 0, 00:19:34.040 "data_size": 7936 00:19:34.040 }, 00:19:34.040 { 00:19:34.040 "name": "BaseBdev2", 00:19:34.040 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:34.040 "is_configured": true, 00:19:34.040 "data_offset": 256, 00:19:34.040 "data_size": 7936 00:19:34.040 } 00:19:34.040 ] 00:19:34.040 }' 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.040 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.610 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.610 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:34.610 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.610 [2024-11-05 11:35:33.610978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.610 [2024-11-05 11:35:33.611112] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:34.610 [2024-11-05 11:35:33.611141] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:34.610 [2024-11-05 11:35:33.611175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.610 [2024-11-05 11:35:33.624422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:34.610 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.610 11:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:34.610 [2024-11-05 11:35:33.626208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.549 "name": "raid_bdev1", 00:19:35.549 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:35.549 "strip_size_kb": 0, 00:19:35.549 "state": "online", 00:19:35.549 "raid_level": "raid1", 00:19:35.549 "superblock": true, 00:19:35.549 "num_base_bdevs": 2, 00:19:35.549 "num_base_bdevs_discovered": 2, 00:19:35.549 "num_base_bdevs_operational": 2, 00:19:35.549 "process": { 00:19:35.549 "type": "rebuild", 00:19:35.549 "target": "spare", 00:19:35.549 "progress": { 00:19:35.549 "blocks": 2560, 00:19:35.549 "percent": 32 00:19:35.549 } 00:19:35.549 }, 00:19:35.549 "base_bdevs_list": [ 00:19:35.549 { 00:19:35.549 "name": "spare", 00:19:35.549 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:35.549 "is_configured": true, 00:19:35.549 "data_offset": 256, 00:19:35.549 "data_size": 7936 00:19:35.549 }, 00:19:35.549 { 00:19:35.549 "name": "BaseBdev2", 00:19:35.549 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:35.549 "is_configured": true, 00:19:35.549 "data_offset": 256, 00:19:35.549 "data_size": 7936 00:19:35.549 } 00:19:35.549 ] 00:19:35.549 }' 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.549 
11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.549 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.549 [2024-11-05 11:35:34.786601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.809 [2024-11-05 11:35:34.830971] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:35.809 [2024-11-05 11:35:34.831028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.809 [2024-11-05 11:35:34.831041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.809 [2024-11-05 11:35:34.831061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.809 11:35:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.809 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.810 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.810 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.810 "name": "raid_bdev1", 00:19:35.810 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:35.810 "strip_size_kb": 0, 00:19:35.810 "state": "online", 00:19:35.810 "raid_level": "raid1", 00:19:35.810 "superblock": true, 00:19:35.810 "num_base_bdevs": 2, 00:19:35.810 "num_base_bdevs_discovered": 1, 00:19:35.810 "num_base_bdevs_operational": 1, 00:19:35.810 "base_bdevs_list": [ 00:19:35.810 { 00:19:35.810 "name": null, 00:19:35.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.810 "is_configured": false, 00:19:35.810 "data_offset": 0, 00:19:35.810 "data_size": 7936 00:19:35.810 }, 00:19:35.810 { 00:19:35.810 "name": "BaseBdev2", 00:19:35.810 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:35.810 "is_configured": true, 00:19:35.810 "data_offset": 256, 00:19:35.810 "data_size": 7936 00:19:35.810 } 
00:19:35.810 ] 00:19:35.810 }' 00:19:35.810 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.810 11:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.069 11:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:36.069 11:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.069 11:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.069 [2024-11-05 11:35:35.309410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:36.069 [2024-11-05 11:35:35.309519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.069 [2024-11-05 11:35:35.309559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:36.069 [2024-11-05 11:35:35.309588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.069 [2024-11-05 11:35:35.309829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.069 [2024-11-05 11:35:35.309882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:36.069 [2024-11-05 11:35:35.309999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:36.069 [2024-11-05 11:35:35.310046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:36.069 [2024-11-05 11:35:35.310086] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:36.069 [2024-11-05 11:35:35.310147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.069 [2024-11-05 11:35:35.322855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:36.069 spare 00:19:36.069 11:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.069 [2024-11-05 11:35:35.324594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.069 11:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.451 "name": 
"raid_bdev1", 00:19:37.451 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:37.451 "strip_size_kb": 0, 00:19:37.451 "state": "online", 00:19:37.451 "raid_level": "raid1", 00:19:37.451 "superblock": true, 00:19:37.451 "num_base_bdevs": 2, 00:19:37.451 "num_base_bdevs_discovered": 2, 00:19:37.451 "num_base_bdevs_operational": 2, 00:19:37.451 "process": { 00:19:37.451 "type": "rebuild", 00:19:37.451 "target": "spare", 00:19:37.451 "progress": { 00:19:37.451 "blocks": 2560, 00:19:37.451 "percent": 32 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 "base_bdevs_list": [ 00:19:37.451 { 00:19:37.451 "name": "spare", 00:19:37.451 "uuid": "eae682d7-66e3-5f99-91cd-2cc44b631fb9", 00:19:37.451 "is_configured": true, 00:19:37.451 "data_offset": 256, 00:19:37.451 "data_size": 7936 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "name": "BaseBdev2", 00:19:37.451 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:37.451 "is_configured": true, 00:19:37.451 "data_offset": 256, 00:19:37.451 "data_size": 7936 00:19:37.451 } 00:19:37.451 ] 00:19:37.451 }' 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.451 [2024-11-05 11:35:36.480926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:37.451 [2024-11-05 11:35:36.529316] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.451 [2024-11-05 11:35:36.529420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.451 [2024-11-05 11:35:36.529456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.451 [2024-11-05 11:35:36.529477] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.451 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.451 "name": "raid_bdev1", 00:19:37.451 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:37.451 "strip_size_kb": 0, 00:19:37.451 "state": "online", 00:19:37.451 "raid_level": "raid1", 00:19:37.451 "superblock": true, 00:19:37.451 "num_base_bdevs": 2, 00:19:37.451 "num_base_bdevs_discovered": 1, 00:19:37.451 "num_base_bdevs_operational": 1, 00:19:37.451 "base_bdevs_list": [ 00:19:37.451 { 00:19:37.451 "name": null, 00:19:37.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.451 "is_configured": false, 00:19:37.451 "data_offset": 0, 00:19:37.451 "data_size": 7936 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "name": "BaseBdev2", 00:19:37.451 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:37.452 "is_configured": true, 00:19:37.452 "data_offset": 256, 00:19:37.452 "data_size": 7936 00:19:37.452 } 00:19:37.452 ] 00:19:37.452 }' 00:19:37.452 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.452 11:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.021 11:35:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.021 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.021 "name": "raid_bdev1", 00:19:38.021 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:38.021 "strip_size_kb": 0, 00:19:38.022 "state": "online", 00:19:38.022 "raid_level": "raid1", 00:19:38.022 "superblock": true, 00:19:38.022 "num_base_bdevs": 2, 00:19:38.022 "num_base_bdevs_discovered": 1, 00:19:38.022 "num_base_bdevs_operational": 1, 00:19:38.022 "base_bdevs_list": [ 00:19:38.022 { 00:19:38.022 "name": null, 00:19:38.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.022 "is_configured": false, 00:19:38.022 "data_offset": 0, 00:19:38.022 "data_size": 7936 00:19:38.022 }, 00:19:38.022 { 00:19:38.022 "name": "BaseBdev2", 00:19:38.022 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:38.022 "is_configured": true, 00:19:38.022 "data_offset": 256, 00:19:38.022 "data_size": 7936 00:19:38.022 } 00:19:38.022 ] 00:19:38.022 }' 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.022 [2024-11-05 11:35:37.192016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:38.022 [2024-11-05 11:35:37.192105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.022 [2024-11-05 11:35:37.192152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:38.022 [2024-11-05 11:35:37.192180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.022 [2024-11-05 11:35:37.192400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.022 [2024-11-05 11:35:37.192450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:38.022 [2024-11-05 11:35:37.192519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:38.022 [2024-11-05 11:35:37.192559] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:38.022 [2024-11-05 11:35:37.192582] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:38.022 [2024-11-05 11:35:37.192589] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:38.022 BaseBdev1 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.022 11:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.961 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.221 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.221 "name": "raid_bdev1", 00:19:39.221 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:39.221 "strip_size_kb": 0, 00:19:39.221 "state": "online", 00:19:39.221 "raid_level": "raid1", 00:19:39.221 "superblock": true, 00:19:39.221 "num_base_bdevs": 2, 00:19:39.221 "num_base_bdevs_discovered": 1, 00:19:39.221 "num_base_bdevs_operational": 1, 00:19:39.221 "base_bdevs_list": [ 00:19:39.221 { 00:19:39.221 "name": null, 00:19:39.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.221 "is_configured": false, 00:19:39.221 "data_offset": 0, 00:19:39.221 "data_size": 7936 00:19:39.221 }, 00:19:39.221 { 00:19:39.221 "name": "BaseBdev2", 00:19:39.221 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:39.221 "is_configured": true, 00:19:39.221 "data_offset": 256, 00:19:39.221 "data_size": 7936 00:19:39.221 } 00:19:39.221 ] 00:19:39.221 }' 00:19:39.221 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.221 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.480 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.481 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.481 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.481 "name": "raid_bdev1", 00:19:39.481 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:39.481 "strip_size_kb": 0, 00:19:39.481 "state": "online", 00:19:39.481 "raid_level": "raid1", 00:19:39.481 "superblock": true, 00:19:39.481 "num_base_bdevs": 2, 00:19:39.481 "num_base_bdevs_discovered": 1, 00:19:39.481 "num_base_bdevs_operational": 1, 00:19:39.481 "base_bdevs_list": [ 00:19:39.481 { 00:19:39.481 "name": null, 00:19:39.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.481 "is_configured": false, 00:19:39.481 "data_offset": 0, 00:19:39.481 "data_size": 7936 00:19:39.481 }, 00:19:39.481 { 00:19:39.481 "name": "BaseBdev2", 00:19:39.481 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:39.481 "is_configured": 
true, 00:19:39.481 "data_offset": 256, 00:19:39.481 "data_size": 7936 00:19:39.481 } 00:19:39.481 ] 00:19:39.481 }' 00:19:39.481 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.481 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.481 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.741 [2024-11-05 11:35:38.773323] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.741 [2024-11-05 11:35:38.773537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:39.741 [2024-11-05 11:35:38.773592] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:39.741 request: 00:19:39.741 { 00:19:39.741 "base_bdev": "BaseBdev1", 00:19:39.741 "raid_bdev": "raid_bdev1", 00:19:39.741 "method": "bdev_raid_add_base_bdev", 00:19:39.741 "req_id": 1 00:19:39.741 } 00:19:39.741 Got JSON-RPC error response 00:19:39.741 response: 00:19:39.741 { 00:19:39.741 "code": -22, 00:19:39.741 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:39.741 } 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.741 11:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.679 "name": "raid_bdev1", 00:19:40.679 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:40.679 "strip_size_kb": 0, 00:19:40.679 "state": "online", 00:19:40.679 "raid_level": "raid1", 00:19:40.679 "superblock": true, 00:19:40.679 "num_base_bdevs": 2, 00:19:40.679 "num_base_bdevs_discovered": 1, 00:19:40.679 "num_base_bdevs_operational": 1, 00:19:40.679 "base_bdevs_list": [ 00:19:40.679 { 00:19:40.679 "name": null, 00:19:40.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.679 "is_configured": false, 00:19:40.679 
"data_offset": 0, 00:19:40.679 "data_size": 7936 00:19:40.679 }, 00:19:40.679 { 00:19:40.679 "name": "BaseBdev2", 00:19:40.679 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:40.679 "is_configured": true, 00:19:40.679 "data_offset": 256, 00:19:40.679 "data_size": 7936 00:19:40.679 } 00:19:40.679 ] 00:19:40.679 }' 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.679 11:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.249 "name": "raid_bdev1", 00:19:41.249 "uuid": "4f396f4d-cdf1-48c5-8cfd-ced198e3c56e", 00:19:41.249 
"strip_size_kb": 0, 00:19:41.249 "state": "online", 00:19:41.249 "raid_level": "raid1", 00:19:41.249 "superblock": true, 00:19:41.249 "num_base_bdevs": 2, 00:19:41.249 "num_base_bdevs_discovered": 1, 00:19:41.249 "num_base_bdevs_operational": 1, 00:19:41.249 "base_bdevs_list": [ 00:19:41.249 { 00:19:41.249 "name": null, 00:19:41.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.249 "is_configured": false, 00:19:41.249 "data_offset": 0, 00:19:41.249 "data_size": 7936 00:19:41.249 }, 00:19:41.249 { 00:19:41.249 "name": "BaseBdev2", 00:19:41.249 "uuid": "d4f4bd5f-2c94-5109-8e5a-b92982700501", 00:19:41.249 "is_configured": true, 00:19:41.249 "data_offset": 256, 00:19:41.249 "data_size": 7936 00:19:41.249 } 00:19:41.249 ] 00:19:41.249 }' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87773 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87773 ']' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87773 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87773 00:19:41.249 killing process with 
pid 87773 00:19:41.249 Received shutdown signal, test time was about 60.000000 seconds 00:19:41.249 00:19:41.249 Latency(us) 00:19:41.249 [2024-11-05T11:35:40.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.249 [2024-11-05T11:35:40.523Z] =================================================================================================================== 00:19:41.249 [2024-11-05T11:35:40.523Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87773' 00:19:41.249 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87773 00:19:41.249 [2024-11-05 11:35:40.405578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.249 [2024-11-05 11:35:40.405694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.249 [2024-11-05 11:35:40.405739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.249 [2024-11-05 11:35:40.405749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, sta 11:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87773 00:19:41.249 te offline 00:19:41.509 [2024-11-05 11:35:40.709586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.449 11:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:42.449 00:19:42.449 real 0m19.632s 00:19:42.449 user 0m25.658s 00:19:42.449 sys 0m2.632s 00:19:42.449 11:35:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:42.449 ************************************ 00:19:42.449 END TEST raid_rebuild_test_sb_md_separate 00:19:42.449 ************************************ 00:19:42.449 11:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.709 11:35:41 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:42.709 11:35:41 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:42.709 11:35:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:42.709 11:35:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:42.709 11:35:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.709 ************************************ 00:19:42.709 START TEST raid_state_function_test_sb_md_interleaved 00:19:42.709 ************************************ 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:42.709 11:35:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88468 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88468' 00:19:42.709 Process raid pid: 88468 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88468 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88468 ']' 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.709 11:35:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.710 [2024-11-05 11:35:41.902626] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:19:42.710 [2024-11-05 11:35:41.902762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.969 [2024-11-05 11:35:42.082390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.970 [2024-11-05 11:35:42.188835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.229 [2024-11-05 11:35:42.389875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.229 [2024-11-05 11:35:42.389911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.489 [2024-11-05 11:35:42.715826] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:43.489 [2024-11-05 11:35:42.715950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:43.489 [2024-11-05 11:35:42.715977] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:43.489 [2024-11-05 11:35:42.716000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:43.489 11:35:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.489 11:35:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.489 "name": "Existed_Raid", 00:19:43.489 "uuid": "eb340ab1-f747-4a95-8a6f-ef7c3c8eed61", 00:19:43.489 "strip_size_kb": 0, 00:19:43.489 "state": "configuring", 00:19:43.489 "raid_level": "raid1", 00:19:43.489 "superblock": true, 00:19:43.489 "num_base_bdevs": 2, 00:19:43.489 "num_base_bdevs_discovered": 0, 00:19:43.489 "num_base_bdevs_operational": 2, 00:19:43.489 "base_bdevs_list": [ 00:19:43.489 { 00:19:43.489 "name": "BaseBdev1", 00:19:43.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.489 "is_configured": false, 00:19:43.489 "data_offset": 0, 00:19:43.489 "data_size": 0 00:19:43.489 }, 00:19:43.489 { 00:19:43.489 "name": "BaseBdev2", 00:19:43.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.489 "is_configured": false, 00:19:43.489 "data_offset": 0, 00:19:43.489 "data_size": 0 00:19:43.489 } 00:19:43.489 ] 00:19:43.489 }' 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.489 11:35:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.059 [2024-11-05 11:35:43.095116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:44.059 [2024-11-05 11:35:43.095208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.059 [2024-11-05 11:35:43.107096] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.059 [2024-11-05 11:35:43.107189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.059 [2024-11-05 11:35:43.107238] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.059 [2024-11-05 11:35:43.107262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.059 [2024-11-05 11:35:43.151173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.059 BaseBdev1 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.059 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 [ 00:19:44.060 { 00:19:44.060 "name": "BaseBdev1", 00:19:44.060 "aliases": [ 00:19:44.060 "f597993b-09bf-4f8b-9c33-543045ba4e58" 00:19:44.060 ], 00:19:44.060 "product_name": "Malloc disk", 00:19:44.060 "block_size": 4128, 00:19:44.060 "num_blocks": 8192, 00:19:44.060 "uuid": "f597993b-09bf-4f8b-9c33-543045ba4e58", 00:19:44.060 "md_size": 32, 00:19:44.060 
"md_interleave": true, 00:19:44.060 "dif_type": 0, 00:19:44.060 "assigned_rate_limits": { 00:19:44.060 "rw_ios_per_sec": 0, 00:19:44.060 "rw_mbytes_per_sec": 0, 00:19:44.060 "r_mbytes_per_sec": 0, 00:19:44.060 "w_mbytes_per_sec": 0 00:19:44.060 }, 00:19:44.060 "claimed": true, 00:19:44.060 "claim_type": "exclusive_write", 00:19:44.060 "zoned": false, 00:19:44.060 "supported_io_types": { 00:19:44.060 "read": true, 00:19:44.060 "write": true, 00:19:44.060 "unmap": true, 00:19:44.060 "flush": true, 00:19:44.060 "reset": true, 00:19:44.060 "nvme_admin": false, 00:19:44.060 "nvme_io": false, 00:19:44.060 "nvme_io_md": false, 00:19:44.060 "write_zeroes": true, 00:19:44.060 "zcopy": true, 00:19:44.060 "get_zone_info": false, 00:19:44.060 "zone_management": false, 00:19:44.060 "zone_append": false, 00:19:44.060 "compare": false, 00:19:44.060 "compare_and_write": false, 00:19:44.060 "abort": true, 00:19:44.060 "seek_hole": false, 00:19:44.060 "seek_data": false, 00:19:44.060 "copy": true, 00:19:44.060 "nvme_iov_md": false 00:19:44.060 }, 00:19:44.060 "memory_domains": [ 00:19:44.060 { 00:19:44.060 "dma_device_id": "system", 00:19:44.060 "dma_device_type": 1 00:19:44.060 }, 00:19:44.060 { 00:19:44.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.060 "dma_device_type": 2 00:19:44.060 } 00:19:44.060 ], 00:19:44.060 "driver_specific": {} 00:19:44.060 } 00:19:44.060 ] 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.060 11:35:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.060 "name": "Existed_Raid", 00:19:44.060 "uuid": "94c71edc-9e5e-41de-bfe2-40d9dd98cf68", 00:19:44.060 "strip_size_kb": 0, 00:19:44.060 "state": "configuring", 00:19:44.060 "raid_level": "raid1", 
00:19:44.060 "superblock": true, 00:19:44.060 "num_base_bdevs": 2, 00:19:44.060 "num_base_bdevs_discovered": 1, 00:19:44.060 "num_base_bdevs_operational": 2, 00:19:44.060 "base_bdevs_list": [ 00:19:44.060 { 00:19:44.060 "name": "BaseBdev1", 00:19:44.060 "uuid": "f597993b-09bf-4f8b-9c33-543045ba4e58", 00:19:44.060 "is_configured": true, 00:19:44.060 "data_offset": 256, 00:19:44.060 "data_size": 7936 00:19:44.060 }, 00:19:44.060 { 00:19:44.060 "name": "BaseBdev2", 00:19:44.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.060 "is_configured": false, 00:19:44.060 "data_offset": 0, 00:19:44.060 "data_size": 0 00:19:44.060 } 00:19:44.060 ] 00:19:44.060 }' 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.060 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.629 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:44.629 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.629 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.630 [2024-11-05 11:35:43.618382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:44.630 [2024-11-05 11:35:43.618463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.630 [2024-11-05 11:35:43.630458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.630 [2024-11-05 11:35:43.632167] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.630 [2024-11-05 11:35:43.632200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.630 
11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.630 "name": "Existed_Raid", 00:19:44.630 "uuid": "a58ea932-dab9-4433-a63d-db8a0663011d", 00:19:44.630 "strip_size_kb": 0, 00:19:44.630 "state": "configuring", 00:19:44.630 "raid_level": "raid1", 00:19:44.630 "superblock": true, 00:19:44.630 "num_base_bdevs": 2, 00:19:44.630 "num_base_bdevs_discovered": 1, 00:19:44.630 "num_base_bdevs_operational": 2, 00:19:44.630 "base_bdevs_list": [ 00:19:44.630 { 00:19:44.630 "name": "BaseBdev1", 00:19:44.630 "uuid": "f597993b-09bf-4f8b-9c33-543045ba4e58", 00:19:44.630 "is_configured": true, 00:19:44.630 "data_offset": 256, 00:19:44.630 "data_size": 7936 00:19:44.630 }, 00:19:44.630 { 00:19:44.630 "name": "BaseBdev2", 00:19:44.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.630 "is_configured": false, 00:19:44.630 "data_offset": 0, 00:19:44.630 "data_size": 0 00:19:44.630 } 00:19:44.630 ] 00:19:44.630 }' 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:44.630 11:35:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 [2024-11-05 11:35:44.067172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.891 [2024-11-05 11:35:44.067454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:44.891 [2024-11-05 11:35:44.067502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:44.891 [2024-11-05 11:35:44.067604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:44.891 [2024-11-05 11:35:44.067702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:44.891 [2024-11-05 11:35:44.067739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:44.891 [2024-11-05 11:35:44.067834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.891 BaseBdev2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 [ 00:19:44.891 { 00:19:44.891 "name": "BaseBdev2", 00:19:44.891 "aliases": [ 00:19:44.891 "c465a16d-7a41-4bbe-885e-36f7eea4f93a" 00:19:44.891 ], 00:19:44.891 "product_name": "Malloc disk", 00:19:44.891 "block_size": 4128, 00:19:44.891 "num_blocks": 8192, 00:19:44.891 "uuid": "c465a16d-7a41-4bbe-885e-36f7eea4f93a", 00:19:44.891 "md_size": 32, 00:19:44.891 "md_interleave": true, 00:19:44.891 "dif_type": 0, 00:19:44.891 "assigned_rate_limits": { 00:19:44.891 "rw_ios_per_sec": 0, 00:19:44.891 "rw_mbytes_per_sec": 0, 00:19:44.891 "r_mbytes_per_sec": 0, 00:19:44.891 "w_mbytes_per_sec": 0 00:19:44.891 }, 00:19:44.891 "claimed": true, 00:19:44.891 "claim_type": "exclusive_write", 
00:19:44.891 "zoned": false, 00:19:44.891 "supported_io_types": { 00:19:44.891 "read": true, 00:19:44.891 "write": true, 00:19:44.891 "unmap": true, 00:19:44.891 "flush": true, 00:19:44.891 "reset": true, 00:19:44.891 "nvme_admin": false, 00:19:44.891 "nvme_io": false, 00:19:44.891 "nvme_io_md": false, 00:19:44.891 "write_zeroes": true, 00:19:44.891 "zcopy": true, 00:19:44.891 "get_zone_info": false, 00:19:44.891 "zone_management": false, 00:19:44.891 "zone_append": false, 00:19:44.891 "compare": false, 00:19:44.891 "compare_and_write": false, 00:19:44.891 "abort": true, 00:19:44.891 "seek_hole": false, 00:19:44.891 "seek_data": false, 00:19:44.891 "copy": true, 00:19:44.891 "nvme_iov_md": false 00:19:44.891 }, 00:19:44.891 "memory_domains": [ 00:19:44.891 { 00:19:44.891 "dma_device_id": "system", 00:19:44.891 "dma_device_type": 1 00:19:44.891 }, 00:19:44.891 { 00:19:44.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.891 "dma_device_type": 2 00:19:44.891 } 00:19:44.891 ], 00:19:44.891 "driver_specific": {} 00:19:44.891 } 00:19:44.891 ] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.891 
11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.891 "name": "Existed_Raid", 00:19:44.891 "uuid": "a58ea932-dab9-4433-a63d-db8a0663011d", 00:19:44.891 "strip_size_kb": 0, 00:19:44.891 "state": "online", 00:19:44.891 "raid_level": "raid1", 00:19:44.891 "superblock": true, 00:19:44.891 "num_base_bdevs": 2, 00:19:44.891 "num_base_bdevs_discovered": 2, 00:19:44.891 
"num_base_bdevs_operational": 2, 00:19:44.891 "base_bdevs_list": [ 00:19:44.891 { 00:19:44.891 "name": "BaseBdev1", 00:19:44.891 "uuid": "f597993b-09bf-4f8b-9c33-543045ba4e58", 00:19:44.891 "is_configured": true, 00:19:44.891 "data_offset": 256, 00:19:44.891 "data_size": 7936 00:19:44.891 }, 00:19:44.891 { 00:19:44.891 "name": "BaseBdev2", 00:19:44.891 "uuid": "c465a16d-7a41-4bbe-885e-36f7eea4f93a", 00:19:44.891 "is_configured": true, 00:19:44.891 "data_offset": 256, 00:19:44.891 "data_size": 7936 00:19:44.891 } 00:19:44.891 ] 00:19:44.891 }' 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.891 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 11:35:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:45.461 [2024-11-05 11:35:44.570560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.461 "name": "Existed_Raid", 00:19:45.461 "aliases": [ 00:19:45.461 "a58ea932-dab9-4433-a63d-db8a0663011d" 00:19:45.461 ], 00:19:45.461 "product_name": "Raid Volume", 00:19:45.461 "block_size": 4128, 00:19:45.461 "num_blocks": 7936, 00:19:45.461 "uuid": "a58ea932-dab9-4433-a63d-db8a0663011d", 00:19:45.461 "md_size": 32, 00:19:45.461 "md_interleave": true, 00:19:45.461 "dif_type": 0, 00:19:45.461 "assigned_rate_limits": { 00:19:45.461 "rw_ios_per_sec": 0, 00:19:45.461 "rw_mbytes_per_sec": 0, 00:19:45.461 "r_mbytes_per_sec": 0, 00:19:45.461 "w_mbytes_per_sec": 0 00:19:45.461 }, 00:19:45.461 "claimed": false, 00:19:45.461 "zoned": false, 00:19:45.461 "supported_io_types": { 00:19:45.461 "read": true, 00:19:45.461 "write": true, 00:19:45.461 "unmap": false, 00:19:45.461 "flush": false, 00:19:45.461 "reset": true, 00:19:45.461 "nvme_admin": false, 00:19:45.461 "nvme_io": false, 00:19:45.461 "nvme_io_md": false, 00:19:45.461 "write_zeroes": true, 00:19:45.461 "zcopy": false, 00:19:45.461 "get_zone_info": false, 00:19:45.461 "zone_management": false, 00:19:45.461 "zone_append": false, 00:19:45.461 "compare": false, 00:19:45.461 "compare_and_write": false, 00:19:45.461 "abort": false, 00:19:45.461 "seek_hole": false, 00:19:45.461 "seek_data": false, 00:19:45.461 "copy": false, 00:19:45.461 "nvme_iov_md": false 00:19:45.461 }, 00:19:45.461 "memory_domains": [ 00:19:45.461 { 00:19:45.461 "dma_device_id": "system", 00:19:45.461 "dma_device_type": 1 00:19:45.461 }, 00:19:45.461 { 00:19:45.461 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:45.461 "dma_device_type": 2 00:19:45.461 }, 00:19:45.461 { 00:19:45.461 "dma_device_id": "system", 00:19:45.461 "dma_device_type": 1 00:19:45.461 }, 00:19:45.461 { 00:19:45.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.461 "dma_device_type": 2 00:19:45.461 } 00:19:45.461 ], 00:19:45.461 "driver_specific": { 00:19:45.461 "raid": { 00:19:45.461 "uuid": "a58ea932-dab9-4433-a63d-db8a0663011d", 00:19:45.461 "strip_size_kb": 0, 00:19:45.461 "state": "online", 00:19:45.461 "raid_level": "raid1", 00:19:45.461 "superblock": true, 00:19:45.461 "num_base_bdevs": 2, 00:19:45.461 "num_base_bdevs_discovered": 2, 00:19:45.461 "num_base_bdevs_operational": 2, 00:19:45.461 "base_bdevs_list": [ 00:19:45.461 { 00:19:45.461 "name": "BaseBdev1", 00:19:45.461 "uuid": "f597993b-09bf-4f8b-9c33-543045ba4e58", 00:19:45.461 "is_configured": true, 00:19:45.461 "data_offset": 256, 00:19:45.461 "data_size": 7936 00:19:45.461 }, 00:19:45.461 { 00:19:45.461 "name": "BaseBdev2", 00:19:45.461 "uuid": "c465a16d-7a41-4bbe-885e-36f7eea4f93a", 00:19:45.461 "is_configured": true, 00:19:45.461 "data_offset": 256, 00:19:45.461 "data_size": 7936 00:19:45.461 } 00:19:45.461 ] 00:19:45.461 } 00:19:45.461 } 00:19:45.461 }' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:45.461 BaseBdev2' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.461 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.721 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.721 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:45.721 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:45.721 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.721 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:45.722 
11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.722 [2024-11-05 11:35:44.781990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.722 11:35:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.722 "name": "Existed_Raid", 00:19:45.722 "uuid": "a58ea932-dab9-4433-a63d-db8a0663011d", 00:19:45.722 "strip_size_kb": 0, 00:19:45.722 "state": "online", 00:19:45.722 "raid_level": "raid1", 00:19:45.722 "superblock": true, 00:19:45.722 "num_base_bdevs": 2, 00:19:45.722 "num_base_bdevs_discovered": 1, 00:19:45.722 "num_base_bdevs_operational": 1, 00:19:45.722 "base_bdevs_list": [ 00:19:45.722 { 00:19:45.722 "name": null, 00:19:45.722 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:45.722 "is_configured": false, 00:19:45.722 "data_offset": 0, 00:19:45.722 "data_size": 7936 00:19:45.722 }, 00:19:45.722 { 00:19:45.722 "name": "BaseBdev2", 00:19:45.722 "uuid": "c465a16d-7a41-4bbe-885e-36f7eea4f93a", 00:19:45.722 "is_configured": true, 00:19:45.722 "data_offset": 256, 00:19:45.722 "data_size": 7936 00:19:45.722 } 00:19:45.722 ] 00:19:45.722 }' 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.722 11:35:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:46.338 11:35:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 [2024-11-05 11:35:45.358904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:46.338 [2024-11-05 11:35:45.359076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:46.338 [2024-11-05 11:35:45.450251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.338 [2024-11-05 11:35:45.450374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:46.338 [2024-11-05 11:35:45.450415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88468 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88468 ']' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88468 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88468 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:46.338 killing process with pid 88468 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88468' 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88468 00:19:46.338 [2024-11-05 11:35:45.547048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:46.338 11:35:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88468 00:19:46.338 [2024-11-05 11:35:45.563117] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.722 
11:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:47.722 00:19:47.722 real 0m4.805s 00:19:47.722 user 0m6.881s 00:19:47.722 sys 0m0.841s 00:19:47.722 11:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:47.722 11:35:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.722 ************************************ 00:19:47.722 END TEST raid_state_function_test_sb_md_interleaved 00:19:47.722 ************************************ 00:19:47.722 11:35:46 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:47.722 11:35:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:47.722 11:35:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:47.722 11:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.722 ************************************ 00:19:47.722 START TEST raid_superblock_test_md_interleaved 00:19:47.722 ************************************ 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88710 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88710 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:47.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88710 ']' 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.722 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:47.723 11:35:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.723 [2024-11-05 11:35:46.785524] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:19:47.723 [2024-11-05 11:35:46.785682] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88710 ] 00:19:47.723 [2024-11-05 11:35:46.966225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.982 [2024-11-05 11:35:47.074765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.242 [2024-11-05 11:35:47.260342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.242 [2024-11-05 11:35:47.260397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.502 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.502 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 malloc1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 [2024-11-05 11:35:47.646889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:19:48.503 [2024-11-05 11:35:47.647022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.503 [2024-11-05 11:35:47.647060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:48.503 [2024-11-05 11:35:47.647087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.503 [2024-11-05 11:35:47.648899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.503 [2024-11-05 11:35:47.648963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:48.503 pt1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 malloc2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 [2024-11-05 11:35:47.704665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.503 [2024-11-05 11:35:47.704764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.503 [2024-11-05 11:35:47.704800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:48.503 [2024-11-05 11:35:47.704823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.503 [2024-11-05 11:35:47.706500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.503 [2024-11-05 11:35:47.706561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.503 pt2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:48.503 
11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 [2024-11-05 11:35:47.716684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:48.503 [2024-11-05 11:35:47.718353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.503 [2024-11-05 11:35:47.718557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:48.503 [2024-11-05 11:35:47.718600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:48.503 [2024-11-05 11:35:47.718683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:48.503 [2024-11-05 11:35:47.718778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:48.503 [2024-11-05 11:35:47.718816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:48.503 [2024-11-05 11:35:47.718884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.503 "name": "raid_bdev1", 00:19:48.503 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:48.503 "strip_size_kb": 0, 00:19:48.503 "state": "online", 00:19:48.503 "raid_level": "raid1", 00:19:48.503 "superblock": true, 00:19:48.503 "num_base_bdevs": 2, 00:19:48.503 "num_base_bdevs_discovered": 2, 00:19:48.503 "num_base_bdevs_operational": 2, 00:19:48.503 "base_bdevs_list": [ 00:19:48.503 { 00:19:48.503 "name": "pt1", 00:19:48.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.503 "is_configured": true, 00:19:48.503 "data_offset": 256, 00:19:48.503 "data_size": 7936 00:19:48.503 }, 00:19:48.503 { 00:19:48.503 "name": 
"pt2", 00:19:48.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.503 "is_configured": true, 00:19:48.503 "data_offset": 256, 00:19:48.503 "data_size": 7936 00:19:48.503 } 00:19:48.503 ] 00:19:48.503 }' 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.503 11:35:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.073 [2024-11-05 11:35:48.196061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.073 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:19:49.073 "name": "raid_bdev1", 00:19:49.073 "aliases": [ 00:19:49.073 "f988a309-0945-4088-896f-db42ba588d4c" 00:19:49.073 ], 00:19:49.073 "product_name": "Raid Volume", 00:19:49.073 "block_size": 4128, 00:19:49.073 "num_blocks": 7936, 00:19:49.073 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:49.073 "md_size": 32, 00:19:49.073 "md_interleave": true, 00:19:49.073 "dif_type": 0, 00:19:49.073 "assigned_rate_limits": { 00:19:49.073 "rw_ios_per_sec": 0, 00:19:49.073 "rw_mbytes_per_sec": 0, 00:19:49.073 "r_mbytes_per_sec": 0, 00:19:49.073 "w_mbytes_per_sec": 0 00:19:49.073 }, 00:19:49.073 "claimed": false, 00:19:49.073 "zoned": false, 00:19:49.073 "supported_io_types": { 00:19:49.073 "read": true, 00:19:49.073 "write": true, 00:19:49.073 "unmap": false, 00:19:49.073 "flush": false, 00:19:49.073 "reset": true, 00:19:49.073 "nvme_admin": false, 00:19:49.073 "nvme_io": false, 00:19:49.073 "nvme_io_md": false, 00:19:49.073 "write_zeroes": true, 00:19:49.073 "zcopy": false, 00:19:49.073 "get_zone_info": false, 00:19:49.073 "zone_management": false, 00:19:49.073 "zone_append": false, 00:19:49.073 "compare": false, 00:19:49.073 "compare_and_write": false, 00:19:49.073 "abort": false, 00:19:49.073 "seek_hole": false, 00:19:49.073 "seek_data": false, 00:19:49.073 "copy": false, 00:19:49.073 "nvme_iov_md": false 00:19:49.073 }, 00:19:49.073 "memory_domains": [ 00:19:49.073 { 00:19:49.073 "dma_device_id": "system", 00:19:49.073 "dma_device_type": 1 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.073 "dma_device_type": 2 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "dma_device_id": "system", 00:19:49.073 "dma_device_type": 1 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.073 "dma_device_type": 2 00:19:49.073 } 00:19:49.073 ], 00:19:49.073 "driver_specific": { 00:19:49.073 "raid": { 00:19:49.073 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:49.073 
"strip_size_kb": 0, 00:19:49.073 "state": "online", 00:19:49.073 "raid_level": "raid1", 00:19:49.073 "superblock": true, 00:19:49.073 "num_base_bdevs": 2, 00:19:49.073 "num_base_bdevs_discovered": 2, 00:19:49.073 "num_base_bdevs_operational": 2, 00:19:49.073 "base_bdevs_list": [ 00:19:49.073 { 00:19:49.073 "name": "pt1", 00:19:49.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.073 "is_configured": true, 00:19:49.073 "data_offset": 256, 00:19:49.073 "data_size": 7936 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "name": "pt2", 00:19:49.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.073 "is_configured": true, 00:19:49.073 "data_offset": 256, 00:19:49.073 "data_size": 7936 00:19:49.073 } 00:19:49.073 ] 00:19:49.073 } 00:19:49.074 } 00:19:49.074 }' 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:49.074 pt2' 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.074 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 
-- # set +x 00:19:49.334 [2024-11-05 11:35:48.427616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f988a309-0945-4088-896f-db42ba588d4c 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f988a309-0945-4088-896f-db42ba588d4c ']' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.334 [2024-11-05 11:35:48.475333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.334 [2024-11-05 11:35:48.475394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.334 [2024-11-05 11:35:48.475479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.334 [2024-11-05 11:35:48.475537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.334 [2024-11-05 11:35:48.475570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.334 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.335 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.335 [2024-11-05 11:35:48.607247] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:49.335 [2024-11-05 11:35:48.609070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:49.335 [2024-11-05 11:35:48.609133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:49.335 [2024-11-05 11:35:48.609238] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:49.335 [2024-11-05 11:35:48.609287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.335 [2024-11-05 11:35:48.609309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:49.595 request: 00:19:49.595 { 00:19:49.595 "name": "raid_bdev1", 00:19:49.595 "raid_level": "raid1", 00:19:49.595 "base_bdevs": [ 00:19:49.595 "malloc1", 00:19:49.595 "malloc2" 00:19:49.595 ], 00:19:49.595 "superblock": false, 00:19:49.595 "method": "bdev_raid_create", 00:19:49.595 "req_id": 1 00:19:49.595 } 00:19:49.595 Got JSON-RPC error response 00:19:49.595 response: 00:19:49.595 { 00:19:49.595 "code": -17, 00:19:49.595 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:49.595 } 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.595 [2024-11-05 11:35:48.675097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:49.595 [2024-11-05 11:35:48.675200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.595 [2024-11-05 11:35:48.675239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:49.595 [2024-11-05 11:35:48.675268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.595 [2024-11-05 11:35:48.677036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.595 [2024-11-05 11:35:48.677101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:49.595 [2024-11-05 11:35:48.677166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:19:49.595 [2024-11-05 11:35:48.677238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:49.595 pt1 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.595 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.595 "name": "raid_bdev1", 00:19:49.595 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:49.595 "strip_size_kb": 0, 00:19:49.595 "state": "configuring", 00:19:49.595 "raid_level": "raid1", 00:19:49.595 "superblock": true, 00:19:49.595 "num_base_bdevs": 2, 00:19:49.595 "num_base_bdevs_discovered": 1, 00:19:49.595 "num_base_bdevs_operational": 2, 00:19:49.595 "base_bdevs_list": [ 00:19:49.595 { 00:19:49.595 "name": "pt1", 00:19:49.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.595 "is_configured": true, 00:19:49.595 "data_offset": 256, 00:19:49.595 "data_size": 7936 00:19:49.595 }, 00:19:49.595 { 00:19:49.595 "name": null, 00:19:49.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.596 "is_configured": false, 00:19:49.596 "data_offset": 256, 00:19:49.596 "data_size": 7936 00:19:49.596 } 00:19:49.596 ] 00:19:49.596 }' 00:19:49.596 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.596 11:35:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.165 [2024-11-05 11:35:49.166234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:50.165 [2024-11-05 11:35:49.166329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.165 [2024-11-05 11:35:49.166362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:50.165 [2024-11-05 11:35:49.166390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.165 [2024-11-05 11:35:49.166515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.165 [2024-11-05 11:35:49.166552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:50.165 [2024-11-05 11:35:49.166602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:50.165 [2024-11-05 11:35:49.166666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.165 [2024-11-05 11:35:49.166755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:50.165 [2024-11-05 11:35:49.166791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:50.165 [2024-11-05 11:35:49.166866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:50.165 [2024-11-05 11:35:49.166965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:50.165 [2024-11-05 11:35:49.166999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:50.165 [2024-11-05 11:35:49.167083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.165 pt2 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.165 "name": "raid_bdev1", 00:19:50.165 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:50.165 "strip_size_kb": 0, 00:19:50.165 "state": "online", 00:19:50.165 "raid_level": "raid1", 00:19:50.165 "superblock": true, 00:19:50.165 "num_base_bdevs": 2, 00:19:50.165 "num_base_bdevs_discovered": 2, 00:19:50.165 "num_base_bdevs_operational": 2, 00:19:50.165 "base_bdevs_list": [ 00:19:50.165 { 00:19:50.165 "name": "pt1", 00:19:50.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.165 "is_configured": true, 00:19:50.165 "data_offset": 256, 00:19:50.165 "data_size": 7936 00:19:50.165 }, 00:19:50.165 { 00:19:50.165 "name": "pt2", 00:19:50.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.165 "is_configured": true, 00:19:50.165 "data_offset": 256, 00:19:50.165 "data_size": 7936 00:19:50.165 } 00:19:50.165 ] 00:19:50.165 }' 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.165 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:50.425 11:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.425 [2024-11-05 11:35:49.609688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:50.425 "name": "raid_bdev1", 00:19:50.425 "aliases": [ 00:19:50.425 "f988a309-0945-4088-896f-db42ba588d4c" 00:19:50.425 ], 00:19:50.425 "product_name": "Raid Volume", 00:19:50.425 "block_size": 4128, 00:19:50.425 "num_blocks": 7936, 00:19:50.425 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:50.425 "md_size": 32, 00:19:50.425 "md_interleave": true, 00:19:50.425 "dif_type": 0, 00:19:50.425 "assigned_rate_limits": { 00:19:50.425 "rw_ios_per_sec": 0, 00:19:50.425 "rw_mbytes_per_sec": 0, 00:19:50.425 "r_mbytes_per_sec": 0, 00:19:50.425 "w_mbytes_per_sec": 0 00:19:50.425 }, 00:19:50.425 "claimed": false, 00:19:50.425 "zoned": false, 00:19:50.425 "supported_io_types": { 00:19:50.425 "read": true, 00:19:50.425 "write": true, 00:19:50.425 "unmap": false, 00:19:50.425 "flush": false, 00:19:50.425 "reset": true, 00:19:50.425 "nvme_admin": false, 00:19:50.425 "nvme_io": false, 00:19:50.425 "nvme_io_md": false, 00:19:50.425 "write_zeroes": true, 00:19:50.425 "zcopy": false, 00:19:50.425 "get_zone_info": false, 00:19:50.425 "zone_management": 
false, 00:19:50.425 "zone_append": false, 00:19:50.425 "compare": false, 00:19:50.425 "compare_and_write": false, 00:19:50.425 "abort": false, 00:19:50.425 "seek_hole": false, 00:19:50.425 "seek_data": false, 00:19:50.425 "copy": false, 00:19:50.425 "nvme_iov_md": false 00:19:50.425 }, 00:19:50.425 "memory_domains": [ 00:19:50.425 { 00:19:50.425 "dma_device_id": "system", 00:19:50.425 "dma_device_type": 1 00:19:50.425 }, 00:19:50.425 { 00:19:50.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.425 "dma_device_type": 2 00:19:50.425 }, 00:19:50.425 { 00:19:50.425 "dma_device_id": "system", 00:19:50.425 "dma_device_type": 1 00:19:50.425 }, 00:19:50.425 { 00:19:50.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.425 "dma_device_type": 2 00:19:50.425 } 00:19:50.425 ], 00:19:50.425 "driver_specific": { 00:19:50.425 "raid": { 00:19:50.425 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:50.425 "strip_size_kb": 0, 00:19:50.425 "state": "online", 00:19:50.425 "raid_level": "raid1", 00:19:50.425 "superblock": true, 00:19:50.425 "num_base_bdevs": 2, 00:19:50.425 "num_base_bdevs_discovered": 2, 00:19:50.425 "num_base_bdevs_operational": 2, 00:19:50.425 "base_bdevs_list": [ 00:19:50.425 { 00:19:50.425 "name": "pt1", 00:19:50.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:50.425 "is_configured": true, 00:19:50.425 "data_offset": 256, 00:19:50.425 "data_size": 7936 00:19:50.425 }, 00:19:50.425 { 00:19:50.425 "name": "pt2", 00:19:50.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.425 "is_configured": true, 00:19:50.425 "data_offset": 256, 00:19:50.425 "data_size": 7936 00:19:50.425 } 00:19:50.425 ] 00:19:50.425 } 00:19:50.425 } 00:19:50.425 }' 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:19:50.425 pt2' 00:19:50.425 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.685 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:50.686 [2024-11-05 11:35:49.845320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f988a309-0945-4088-896f-db42ba588d4c '!=' f988a309-0945-4088-896f-db42ba588d4c ']' 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.686 11:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.686 [2024-11-05 11:35:49.893034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.686 11:35:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.686 "name": "raid_bdev1", 00:19:50.686 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:50.686 "strip_size_kb": 0, 00:19:50.686 "state": "online", 00:19:50.686 "raid_level": "raid1", 00:19:50.686 "superblock": true, 00:19:50.686 "num_base_bdevs": 2, 00:19:50.686 "num_base_bdevs_discovered": 1, 00:19:50.686 "num_base_bdevs_operational": 1, 00:19:50.686 "base_bdevs_list": [ 00:19:50.686 { 00:19:50.686 "name": null, 00:19:50.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.686 "is_configured": false, 00:19:50.686 "data_offset": 0, 00:19:50.686 "data_size": 7936 00:19:50.686 }, 00:19:50.686 { 00:19:50.686 "name": "pt2", 00:19:50.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:50.686 "is_configured": true, 00:19:50.686 "data_offset": 256, 00:19:50.686 "data_size": 7936 00:19:50.686 } 00:19:50.686 ] 00:19:50.686 }' 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.686 11:35:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.255 [2024-11-05 11:35:50.356203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.255 [2024-11-05 11:35:50.356269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:19:51.255 [2024-11-05 11:35:50.356340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.255 [2024-11-05 11:35:50.356393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.255 [2024-11-05 11:35:50.356425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:51.255 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.256 
11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.256 [2024-11-05 11:35:50.432073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:51.256 [2024-11-05 11:35:50.432179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.256 [2024-11-05 11:35:50.432209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:51.256 [2024-11-05 11:35:50.432235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.256 [2024-11-05 11:35:50.433994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.256 [2024-11-05 11:35:50.434063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:51.256 [2024-11-05 11:35:50.434122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:51.256 [2024-11-05 11:35:50.434188] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:51.256 [2024-11-05 11:35:50.434258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:51.256 [2024-11-05 11:35:50.434309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:51.256 [2024-11-05 11:35:50.434399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:51.256 [2024-11-05 11:35:50.434492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:51.256 [2024-11-05 11:35:50.434527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:51.256 [2024-11-05 11:35:50.434609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.256 pt2 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.256 "name": "raid_bdev1", 00:19:51.256 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:51.256 "strip_size_kb": 0, 00:19:51.256 "state": "online", 00:19:51.256 "raid_level": "raid1", 00:19:51.256 "superblock": true, 00:19:51.256 "num_base_bdevs": 2, 00:19:51.256 "num_base_bdevs_discovered": 1, 00:19:51.256 "num_base_bdevs_operational": 1, 00:19:51.256 "base_bdevs_list": [ 00:19:51.256 { 00:19:51.256 "name": null, 00:19:51.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.256 "is_configured": false, 00:19:51.256 "data_offset": 256, 00:19:51.256 "data_size": 7936 00:19:51.256 }, 00:19:51.256 { 00:19:51.256 "name": "pt2", 00:19:51.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.256 "is_configured": true, 00:19:51.256 "data_offset": 256, 00:19:51.256 "data_size": 7936 00:19:51.256 } 00:19:51.256 ] 00:19:51.256 }' 00:19:51.256 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.256 11:35:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.826 [2024-11-05 11:35:50.895253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.826 [2024-11-05 11:35:50.895323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.826 [2024-11-05 11:35:50.895383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.826 [2024-11-05 11:35:50.895432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.826 [2024-11-05 11:35:50.895462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:51.826 11:35:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.826 [2024-11-05 11:35:50.955221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:51.826 [2024-11-05 11:35:50.955309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.826 [2024-11-05 11:35:50.955330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:51.826 [2024-11-05 11:35:50.955339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.826 [2024-11-05 11:35:50.957107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.826 [2024-11-05 11:35:50.957199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:51.826 [2024-11-05 11:35:50.957247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:51.826 [2024-11-05 11:35:50.957292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:51.826 [2024-11-05 11:35:50.957369] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:51.826 [2024-11-05 11:35:50.957388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.826 [2024-11-05 11:35:50.957402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:19:51.826 [2024-11-05 11:35:50.957457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:51.826 [2024-11-05 11:35:50.957509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:51.826 [2024-11-05 11:35:50.957516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:51.826 [2024-11-05 11:35:50.957564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:51.826 [2024-11-05 11:35:50.957617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:51.826 [2024-11-05 11:35:50.957628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:51.826 [2024-11-05 11:35:50.957687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.826 pt1 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.826 11:35:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.826 "name": "raid_bdev1", 00:19:51.826 "uuid": "f988a309-0945-4088-896f-db42ba588d4c", 00:19:51.826 "strip_size_kb": 0, 00:19:51.826 "state": "online", 00:19:51.826 "raid_level": "raid1", 00:19:51.826 "superblock": true, 00:19:51.826 "num_base_bdevs": 2, 00:19:51.826 "num_base_bdevs_discovered": 1, 00:19:51.826 "num_base_bdevs_operational": 1, 00:19:51.826 "base_bdevs_list": [ 00:19:51.826 { 00:19:51.826 "name": null, 00:19:51.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.826 "is_configured": false, 00:19:51.826 "data_offset": 256, 00:19:51.826 "data_size": 7936 00:19:51.826 }, 00:19:51.826 { 00:19:51.826 "name": "pt2", 00:19:51.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.826 "is_configured": true, 00:19:51.826 "data_offset": 256, 00:19:51.826 
"data_size": 7936 00:19:51.826 } 00:19:51.826 ] 00:19:51.826 }' 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.826 11:35:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.396 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.397 [2024-11-05 11:35:51.446496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f988a309-0945-4088-896f-db42ba588d4c '!=' f988a309-0945-4088-896f-db42ba588d4c ']' 00:19:52.397 11:35:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88710 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88710 ']' 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88710 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88710 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:52.397 killing process with pid 88710 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88710' 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 88710 00:19:52.397 [2024-11-05 11:35:51.520851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.397 [2024-11-05 11:35:51.520910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.397 [2024-11-05 11:35:51.520943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.397 [2024-11-05 11:35:51.520955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:52.397 11:35:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 88710 00:19:52.655 [2024-11-05 11:35:51.714036] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.596 11:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:53.596 00:19:53.596 real 0m6.063s 00:19:53.596 user 0m9.253s 00:19:53.596 sys 0m1.127s 00:19:53.596 ************************************ 00:19:53.596 END TEST raid_superblock_test_md_interleaved 00:19:53.596 ************************************ 00:19:53.596 11:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:53.596 11:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 11:35:52 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:53.596 11:35:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:53.596 11:35:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:53.596 11:35:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.596 ************************************ 00:19:53.596 START TEST raid_rebuild_test_sb_md_interleaved 00:19:53.596 ************************************ 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:53.596 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:53.596 11:35:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:53.597 
11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89033 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89033 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89033 ']' 00:19:53.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.597 11:35:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:53.857 Zero copy mechanism will not be used. 00:19:53.857 [2024-11-05 11:35:52.938467] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:19:53.857 [2024-11-05 11:35:52.938595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89033 ] 00:19:53.857 [2024-11-05 11:35:53.119247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.116 [2024-11-05 11:35:53.227871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.376 [2024-11-05 11:35:53.413178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.376 [2024-11-05 11:35:53.413235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.637 BaseBdev1_malloc 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.637 11:35:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.637 [2024-11-05 11:35:53.800820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:54.637 [2024-11-05 11:35:53.800968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.637 [2024-11-05 11:35:53.801004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:54.637 [2024-11-05 11:35:53.801034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.637 [2024-11-05 11:35:53.802800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.637 [2024-11-05 11:35:53.802840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.637 BaseBdev1 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.637 BaseBdev2_malloc 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.637 [2024-11-05 11:35:53.853754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:54.637 [2024-11-05 11:35:53.853876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.637 [2024-11-05 11:35:53.853912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:54.637 [2024-11-05 11:35:53.853943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.637 [2024-11-05 11:35:53.855691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.637 [2024-11-05 11:35:53.855759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:54.637 BaseBdev2 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.637 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.897 spare_malloc 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.897 spare_delay 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.897 [2024-11-05 11:35:53.952451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:54.897 [2024-11-05 11:35:53.952572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.897 [2024-11-05 11:35:53.952595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:54.897 [2024-11-05 11:35:53.952607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.897 [2024-11-05 11:35:53.954334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.897 [2024-11-05 11:35:53.954372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.897 spare 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.897 [2024-11-05 11:35:53.964472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.897 [2024-11-05 11:35:53.966214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.897 [2024-11-05 
11:35:53.966428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:54.897 [2024-11-05 11:35:53.966464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:54.897 [2024-11-05 11:35:53.966559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:54.897 [2024-11-05 11:35:53.966655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:54.897 [2024-11-05 11:35:53.966690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:54.897 [2024-11-05 11:35:53.966786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.897 11:35:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.897 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.897 "name": "raid_bdev1", 00:19:54.897 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:54.897 "strip_size_kb": 0, 00:19:54.897 "state": "online", 00:19:54.897 "raid_level": "raid1", 00:19:54.897 "superblock": true, 00:19:54.897 "num_base_bdevs": 2, 00:19:54.897 "num_base_bdevs_discovered": 2, 00:19:54.897 "num_base_bdevs_operational": 2, 00:19:54.897 "base_bdevs_list": [ 00:19:54.897 { 00:19:54.897 "name": "BaseBdev1", 00:19:54.897 "uuid": "f15f03e9-6926-5df7-a8ff-f6f9f0153ede", 00:19:54.897 "is_configured": true, 00:19:54.897 "data_offset": 256, 00:19:54.897 "data_size": 7936 00:19:54.897 }, 00:19:54.897 { 00:19:54.897 "name": "BaseBdev2", 00:19:54.897 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:54.897 "is_configured": true, 00:19:54.897 "data_offset": 256, 00:19:54.897 "data_size": 7936 00:19:54.898 } 00:19:54.898 ] 00:19:54.898 }' 00:19:54.898 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.898 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.157 11:35:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:55.157 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.157 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.157 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.157 [2024-11-05 11:35:54.411910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:55.417 11:35:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 [2024-11-05 11:35:54.487518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.417 11:35:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.417 "name": "raid_bdev1", 00:19:55.417 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:55.417 "strip_size_kb": 0, 00:19:55.417 "state": "online", 00:19:55.417 "raid_level": "raid1", 00:19:55.417 "superblock": true, 00:19:55.417 "num_base_bdevs": 2, 00:19:55.417 "num_base_bdevs_discovered": 1, 00:19:55.417 "num_base_bdevs_operational": 1, 00:19:55.417 "base_bdevs_list": [ 00:19:55.417 { 00:19:55.417 "name": null, 00:19:55.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.417 "is_configured": false, 00:19:55.417 "data_offset": 0, 00:19:55.417 "data_size": 7936 00:19:55.417 }, 00:19:55.417 { 00:19:55.417 "name": "BaseBdev2", 00:19:55.417 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:55.417 "is_configured": true, 00:19:55.417 "data_offset": 256, 00:19:55.417 "data_size": 7936 00:19:55.417 } 00:19:55.417 ] 00:19:55.417 }' 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.417 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.677 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.677 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.677 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.677 [2024-11-05 11:35:54.946752] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.937 [2024-11-05 11:35:54.962624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:55.937 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.937 11:35:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:55.937 [2024-11-05 11:35:54.964520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.876 11:35:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.876 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.876 "name": "raid_bdev1", 00:19:56.876 
"uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:56.876 "strip_size_kb": 0, 00:19:56.876 "state": "online", 00:19:56.876 "raid_level": "raid1", 00:19:56.876 "superblock": true, 00:19:56.876 "num_base_bdevs": 2, 00:19:56.876 "num_base_bdevs_discovered": 2, 00:19:56.876 "num_base_bdevs_operational": 2, 00:19:56.876 "process": { 00:19:56.876 "type": "rebuild", 00:19:56.876 "target": "spare", 00:19:56.876 "progress": { 00:19:56.876 "blocks": 2560, 00:19:56.876 "percent": 32 00:19:56.876 } 00:19:56.876 }, 00:19:56.876 "base_bdevs_list": [ 00:19:56.876 { 00:19:56.876 "name": "spare", 00:19:56.876 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:19:56.876 "is_configured": true, 00:19:56.876 "data_offset": 256, 00:19:56.876 "data_size": 7936 00:19:56.876 }, 00:19:56.876 { 00:19:56.876 "name": "BaseBdev2", 00:19:56.876 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:56.876 "is_configured": true, 00:19:56.876 "data_offset": 256, 00:19:56.876 "data_size": 7936 00:19:56.876 } 00:19:56.876 ] 00:19:56.876 }' 00:19:56.876 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.877 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.877 [2024-11-05 11:35:56.124312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:57.137 [2024-11-05 11:35:56.169236] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:57.137 [2024-11-05 11:35:56.169349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.137 [2024-11-05 11:35:56.169384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.137 [2024-11-05 11:35:56.169409] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.137 "name": "raid_bdev1", 00:19:57.137 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:57.137 "strip_size_kb": 0, 00:19:57.137 "state": "online", 00:19:57.137 "raid_level": "raid1", 00:19:57.137 "superblock": true, 00:19:57.137 "num_base_bdevs": 2, 00:19:57.137 "num_base_bdevs_discovered": 1, 00:19:57.137 "num_base_bdevs_operational": 1, 00:19:57.137 "base_bdevs_list": [ 00:19:57.137 { 00:19:57.137 "name": null, 00:19:57.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.137 "is_configured": false, 00:19:57.137 "data_offset": 0, 00:19:57.137 "data_size": 7936 00:19:57.137 }, 00:19:57.137 { 00:19:57.137 "name": "BaseBdev2", 00:19:57.137 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:57.137 "is_configured": true, 00:19:57.137 "data_offset": 256, 00:19:57.137 "data_size": 7936 00:19:57.137 } 00:19:57.137 ] 00:19:57.137 }' 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.137 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.396 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.396 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:57.396 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.396 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.396 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.656 "name": "raid_bdev1", 00:19:57.656 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:57.656 "strip_size_kb": 0, 00:19:57.656 "state": "online", 00:19:57.656 "raid_level": "raid1", 00:19:57.656 "superblock": true, 00:19:57.656 "num_base_bdevs": 2, 00:19:57.656 "num_base_bdevs_discovered": 1, 00:19:57.656 "num_base_bdevs_operational": 1, 00:19:57.656 "base_bdevs_list": [ 00:19:57.656 { 00:19:57.656 "name": null, 00:19:57.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.656 "is_configured": false, 00:19:57.656 "data_offset": 0, 00:19:57.656 "data_size": 7936 00:19:57.656 }, 00:19:57.656 { 00:19:57.656 "name": "BaseBdev2", 00:19:57.656 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:57.656 "is_configured": true, 00:19:57.656 "data_offset": 256, 00:19:57.656 "data_size": 7936 00:19:57.656 } 00:19:57.656 ] 00:19:57.656 }' 
00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.656 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.656 [2024-11-05 11:35:56.822192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.656 [2024-11-05 11:35:56.836567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.657 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.657 11:35:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:57.657 [2024-11-05 11:35:56.838325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.595 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.855 "name": "raid_bdev1", 00:19:58.855 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:58.855 "strip_size_kb": 0, 00:19:58.855 "state": "online", 00:19:58.855 "raid_level": "raid1", 00:19:58.855 "superblock": true, 00:19:58.855 "num_base_bdevs": 2, 00:19:58.855 "num_base_bdevs_discovered": 2, 00:19:58.855 "num_base_bdevs_operational": 2, 00:19:58.855 "process": { 00:19:58.855 "type": "rebuild", 00:19:58.855 "target": "spare", 00:19:58.855 "progress": { 00:19:58.855 "blocks": 2560, 00:19:58.855 "percent": 32 00:19:58.855 } 00:19:58.855 }, 00:19:58.855 "base_bdevs_list": [ 00:19:58.855 { 00:19:58.855 "name": "spare", 00:19:58.855 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:19:58.855 "is_configured": true, 00:19:58.855 "data_offset": 256, 00:19:58.855 "data_size": 7936 00:19:58.855 }, 00:19:58.855 { 00:19:58.855 "name": "BaseBdev2", 00:19:58.855 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:58.855 "is_configured": true, 00:19:58.855 "data_offset": 256, 00:19:58.855 "data_size": 7936 00:19:58.855 } 00:19:58.855 ] 00:19:58.855 }' 00:19:58.855 11:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:58.855 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=727 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.855 11:35:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.855 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.856 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.856 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.856 11:35:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.856 "name": "raid_bdev1", 00:19:58.856 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:19:58.856 "strip_size_kb": 0, 00:19:58.856 "state": "online", 00:19:58.856 "raid_level": "raid1", 00:19:58.856 "superblock": true, 00:19:58.856 "num_base_bdevs": 2, 00:19:58.856 "num_base_bdevs_discovered": 2, 00:19:58.856 "num_base_bdevs_operational": 2, 00:19:58.856 "process": { 00:19:58.856 "type": "rebuild", 00:19:58.856 "target": "spare", 00:19:58.856 "progress": { 00:19:58.856 "blocks": 2816, 00:19:58.856 "percent": 35 00:19:58.856 } 00:19:58.856 }, 00:19:58.856 "base_bdevs_list": [ 00:19:58.856 { 00:19:58.856 "name": "spare", 00:19:58.856 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:19:58.856 "is_configured": true, 00:19:58.856 "data_offset": 256, 00:19:58.856 "data_size": 7936 00:19:58.856 }, 00:19:58.856 { 00:19:58.856 "name": "BaseBdev2", 00:19:58.856 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:19:58.856 "is_configured": true, 00:19:58.856 "data_offset": 256, 00:19:58.856 "data_size": 7936 00:19:58.856 } 00:19:58.856 ] 00:19:58.856 }' 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.856 11:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.236 11:35:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.236 "name": "raid_bdev1", 00:20:00.236 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:00.236 "strip_size_kb": 0, 00:20:00.236 "state": "online", 00:20:00.236 "raid_level": "raid1", 00:20:00.236 "superblock": true, 00:20:00.236 "num_base_bdevs": 2, 00:20:00.236 "num_base_bdevs_discovered": 2, 00:20:00.236 "num_base_bdevs_operational": 2, 00:20:00.236 "process": { 00:20:00.236 "type": "rebuild", 00:20:00.236 "target": "spare", 00:20:00.236 "progress": { 00:20:00.236 "blocks": 5632, 00:20:00.236 "percent": 70 00:20:00.236 } 00:20:00.236 }, 00:20:00.236 "base_bdevs_list": [ 00:20:00.236 { 00:20:00.236 "name": "spare", 00:20:00.236 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:00.236 "is_configured": true, 00:20:00.236 "data_offset": 256, 00:20:00.236 "data_size": 7936 00:20:00.236 }, 00:20:00.236 { 00:20:00.236 "name": "BaseBdev2", 00:20:00.236 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:00.236 "is_configured": true, 00:20:00.236 "data_offset": 256, 00:20:00.236 "data_size": 7936 00:20:00.236 } 00:20:00.236 ] 00:20:00.236 }' 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.236 11:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.805 [2024-11-05 11:35:59.949954] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:00.805 [2024-11-05 11:35:59.950022] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:00.805 [2024-11-05 11:35:59.950118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.064 "name": "raid_bdev1", 00:20:01.064 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:01.064 "strip_size_kb": 0, 00:20:01.064 "state": "online", 00:20:01.064 "raid_level": "raid1", 00:20:01.064 "superblock": true, 00:20:01.064 "num_base_bdevs": 2, 00:20:01.064 
"num_base_bdevs_discovered": 2, 00:20:01.064 "num_base_bdevs_operational": 2, 00:20:01.064 "base_bdevs_list": [ 00:20:01.064 { 00:20:01.064 "name": "spare", 00:20:01.064 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:01.064 "is_configured": true, 00:20:01.064 "data_offset": 256, 00:20:01.064 "data_size": 7936 00:20:01.064 }, 00:20:01.064 { 00:20:01.064 "name": "BaseBdev2", 00:20:01.064 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:01.064 "is_configured": true, 00:20:01.064 "data_offset": 256, 00:20:01.064 "data_size": 7936 00:20:01.064 } 00:20:01.064 ] 00:20:01.064 }' 00:20:01.064 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.324 
11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.324 "name": "raid_bdev1", 00:20:01.324 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:01.324 "strip_size_kb": 0, 00:20:01.324 "state": "online", 00:20:01.324 "raid_level": "raid1", 00:20:01.324 "superblock": true, 00:20:01.324 "num_base_bdevs": 2, 00:20:01.324 "num_base_bdevs_discovered": 2, 00:20:01.324 "num_base_bdevs_operational": 2, 00:20:01.324 "base_bdevs_list": [ 00:20:01.324 { 00:20:01.324 "name": "spare", 00:20:01.324 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:01.324 "is_configured": true, 00:20:01.324 "data_offset": 256, 00:20:01.324 "data_size": 7936 00:20:01.324 }, 00:20:01.324 { 00:20:01.324 "name": "BaseBdev2", 00:20:01.324 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:01.324 "is_configured": true, 00:20:01.324 "data_offset": 256, 00:20:01.324 "data_size": 7936 00:20:01.324 } 00:20:01.324 ] 00:20:01.324 }' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.324 11:36:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.324 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.324 "name": 
"raid_bdev1", 00:20:01.324 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:01.324 "strip_size_kb": 0, 00:20:01.324 "state": "online", 00:20:01.324 "raid_level": "raid1", 00:20:01.324 "superblock": true, 00:20:01.324 "num_base_bdevs": 2, 00:20:01.324 "num_base_bdevs_discovered": 2, 00:20:01.324 "num_base_bdevs_operational": 2, 00:20:01.325 "base_bdevs_list": [ 00:20:01.325 { 00:20:01.325 "name": "spare", 00:20:01.325 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:01.325 "is_configured": true, 00:20:01.325 "data_offset": 256, 00:20:01.325 "data_size": 7936 00:20:01.325 }, 00:20:01.325 { 00:20:01.325 "name": "BaseBdev2", 00:20:01.325 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:01.325 "is_configured": true, 00:20:01.325 "data_offset": 256, 00:20:01.325 "data_size": 7936 00:20:01.325 } 00:20:01.325 ] 00:20:01.325 }' 00:20:01.325 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.325 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 [2024-11-05 11:36:00.972801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.894 [2024-11-05 11:36:00.972880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.894 [2024-11-05 11:36:00.972971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.894 [2024-11-05 11:36:00.973060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.894 [2024-11-05 
11:36:00.973092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 11:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.894 11:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 [2024-11-05 11:36:01.032702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:01.894 [2024-11-05 11:36:01.032794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.894 [2024-11-05 11:36:01.032829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:01.894 [2024-11-05 11:36:01.032855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.894 [2024-11-05 11:36:01.034667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.894 [2024-11-05 11:36:01.034730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:01.894 [2024-11-05 11:36:01.034795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:01.894 [2024-11-05 11:36:01.034850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.894 [2024-11-05 11:36:01.034969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.894 spare 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.894 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 [2024-11-05 11:36:01.134852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:01.894 [2024-11-05 11:36:01.134921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:01.894 [2024-11-05 11:36:01.135015] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:01.894 [2024-11-05 11:36:01.135112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:01.894 [2024-11-05 11:36:01.135156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:01.895 [2024-11-05 11:36:01.135299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.895 
11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.895 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.154 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.154 "name": "raid_bdev1", 00:20:02.154 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:02.154 "strip_size_kb": 0, 00:20:02.154 "state": "online", 00:20:02.154 "raid_level": "raid1", 00:20:02.154 "superblock": true, 00:20:02.154 "num_base_bdevs": 2, 00:20:02.154 "num_base_bdevs_discovered": 2, 00:20:02.154 "num_base_bdevs_operational": 2, 00:20:02.154 "base_bdevs_list": [ 00:20:02.154 { 00:20:02.154 "name": "spare", 00:20:02.154 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:02.154 "is_configured": true, 00:20:02.154 "data_offset": 256, 00:20:02.154 "data_size": 7936 00:20:02.154 }, 00:20:02.154 { 00:20:02.154 "name": "BaseBdev2", 00:20:02.154 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:02.154 "is_configured": true, 00:20:02.154 "data_offset": 256, 00:20:02.154 "data_size": 7936 00:20:02.154 } 00:20:02.154 ] 00:20:02.154 }' 00:20:02.154 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.154 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.413 11:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.413 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.413 "name": "raid_bdev1", 00:20:02.413 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:02.413 "strip_size_kb": 0, 00:20:02.413 "state": "online", 00:20:02.413 "raid_level": "raid1", 00:20:02.413 "superblock": true, 00:20:02.413 "num_base_bdevs": 2, 00:20:02.413 "num_base_bdevs_discovered": 2, 00:20:02.413 "num_base_bdevs_operational": 2, 00:20:02.413 "base_bdevs_list": [ 00:20:02.413 { 00:20:02.413 "name": "spare", 00:20:02.413 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:02.413 "is_configured": true, 00:20:02.413 "data_offset": 256, 00:20:02.413 "data_size": 7936 00:20:02.414 }, 00:20:02.414 { 00:20:02.414 "name": "BaseBdev2", 00:20:02.414 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:02.414 "is_configured": true, 00:20:02.414 "data_offset": 256, 00:20:02.414 "data_size": 7936 00:20:02.414 } 00:20:02.414 ] 00:20:02.414 }' 00:20:02.414 11:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.414 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.414 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.673 [2024-11-05 11:36:01.751536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.673 11:36:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.673 "name": "raid_bdev1", 00:20:02.673 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:02.673 "strip_size_kb": 0, 00:20:02.673 "state": "online", 00:20:02.673 
"raid_level": "raid1", 00:20:02.673 "superblock": true, 00:20:02.673 "num_base_bdevs": 2, 00:20:02.673 "num_base_bdevs_discovered": 1, 00:20:02.673 "num_base_bdevs_operational": 1, 00:20:02.673 "base_bdevs_list": [ 00:20:02.673 { 00:20:02.673 "name": null, 00:20:02.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.673 "is_configured": false, 00:20:02.673 "data_offset": 0, 00:20:02.673 "data_size": 7936 00:20:02.673 }, 00:20:02.673 { 00:20:02.673 "name": "BaseBdev2", 00:20:02.673 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:02.673 "is_configured": true, 00:20:02.673 "data_offset": 256, 00:20:02.673 "data_size": 7936 00:20:02.673 } 00:20:02.673 ] 00:20:02.673 }' 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.673 11:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.933 11:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:02.933 11:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.933 11:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.933 [2024-11-05 11:36:02.178912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.933 [2024-11-05 11:36:02.179083] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.933 [2024-11-05 11:36:02.179165] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:02.933 [2024-11-05 11:36:02.179240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.933 [2024-11-05 11:36:02.194362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:02.933 11:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.933 11:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:02.933 [2024-11-05 11:36:02.196194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:04.312 "name": "raid_bdev1", 00:20:04.312 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:04.312 "strip_size_kb": 0, 00:20:04.312 "state": "online", 00:20:04.312 "raid_level": "raid1", 00:20:04.312 "superblock": true, 00:20:04.312 "num_base_bdevs": 2, 00:20:04.312 "num_base_bdevs_discovered": 2, 00:20:04.312 "num_base_bdevs_operational": 2, 00:20:04.312 "process": { 00:20:04.312 "type": "rebuild", 00:20:04.312 "target": "spare", 00:20:04.312 "progress": { 00:20:04.312 "blocks": 2560, 00:20:04.312 "percent": 32 00:20:04.312 } 00:20:04.312 }, 00:20:04.312 "base_bdevs_list": [ 00:20:04.312 { 00:20:04.312 "name": "spare", 00:20:04.312 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:04.312 "is_configured": true, 00:20:04.312 "data_offset": 256, 00:20:04.312 "data_size": 7936 00:20:04.312 }, 00:20:04.312 { 00:20:04.312 "name": "BaseBdev2", 00:20:04.312 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:04.312 "is_configured": true, 00:20:04.312 "data_offset": 256, 00:20:04.312 "data_size": 7936 00:20:04.312 } 00:20:04.312 ] 00:20:04.312 }' 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.312 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 [2024-11-05 11:36:03.356544] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.312 [2024-11-05 11:36:03.400895] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.313 [2024-11-05 11:36:03.400952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.313 [2024-11-05 11:36:03.400966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.313 [2024-11-05 11:36:03.400974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.313 11:36:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.313 "name": "raid_bdev1", 00:20:04.313 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:04.313 "strip_size_kb": 0, 00:20:04.313 "state": "online", 00:20:04.313 "raid_level": "raid1", 00:20:04.313 "superblock": true, 00:20:04.313 "num_base_bdevs": 2, 00:20:04.313 "num_base_bdevs_discovered": 1, 00:20:04.313 "num_base_bdevs_operational": 1, 00:20:04.313 "base_bdevs_list": [ 00:20:04.313 { 00:20:04.313 "name": null, 00:20:04.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.313 "is_configured": false, 00:20:04.313 "data_offset": 0, 00:20:04.313 "data_size": 7936 00:20:04.313 }, 00:20:04.313 { 00:20:04.313 "name": "BaseBdev2", 00:20:04.313 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:04.313 "is_configured": true, 00:20:04.313 "data_offset": 256, 00:20:04.313 "data_size": 7936 00:20:04.313 } 00:20:04.313 ] 00:20:04.313 }' 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.313 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.572 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:04.572 11:36:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.572 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.572 [2024-11-05 11:36:03.821640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.572 [2024-11-05 11:36:03.821747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.572 [2024-11-05 11:36:03.821785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:04.572 [2024-11-05 11:36:03.821814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.572 [2024-11-05 11:36:03.822024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.572 [2024-11-05 11:36:03.822071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.572 [2024-11-05 11:36:03.822148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:04.572 [2024-11-05 11:36:03.822186] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.572 [2024-11-05 11:36:03.822223] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:04.572 [2024-11-05 11:36:03.822314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:04.572 [2024-11-05 11:36:03.836849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:04.572 spare 00:20:04.572 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.572 [2024-11-05 11:36:03.838667] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:04.572 11:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:05.953 "name": "raid_bdev1", 00:20:05.953 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:05.953 "strip_size_kb": 0, 00:20:05.953 "state": "online", 00:20:05.953 "raid_level": "raid1", 00:20:05.953 "superblock": true, 00:20:05.953 "num_base_bdevs": 2, 00:20:05.953 "num_base_bdevs_discovered": 2, 00:20:05.953 "num_base_bdevs_operational": 2, 00:20:05.953 "process": { 00:20:05.953 "type": "rebuild", 00:20:05.953 "target": "spare", 00:20:05.953 "progress": { 00:20:05.953 "blocks": 2560, 00:20:05.953 "percent": 32 00:20:05.953 } 00:20:05.953 }, 00:20:05.953 "base_bdevs_list": [ 00:20:05.953 { 00:20:05.953 "name": "spare", 00:20:05.953 "uuid": "927f9a44-7ac0-5444-abff-d2bbaf18d9af", 00:20:05.953 "is_configured": true, 00:20:05.953 "data_offset": 256, 00:20:05.953 "data_size": 7936 00:20:05.953 }, 00:20:05.953 { 00:20:05.953 "name": "BaseBdev2", 00:20:05.953 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:05.953 "is_configured": true, 00:20:05.953 "data_offset": 256, 00:20:05.953 "data_size": 7936 00:20:05.953 } 00:20:05.953 ] 00:20:05.953 }' 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.953 11:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.953 [2024-11-05 
11:36:05.002867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.953 [2024-11-05 11:36:05.043322] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:05.953 [2024-11-05 11:36:05.043419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.953 [2024-11-05 11:36:05.043463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.953 [2024-11-05 11:36:05.043484] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.953 11:36:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.953 "name": "raid_bdev1", 00:20:05.953 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:05.953 "strip_size_kb": 0, 00:20:05.953 "state": "online", 00:20:05.953 "raid_level": "raid1", 00:20:05.953 "superblock": true, 00:20:05.953 "num_base_bdevs": 2, 00:20:05.953 "num_base_bdevs_discovered": 1, 00:20:05.953 "num_base_bdevs_operational": 1, 00:20:05.953 "base_bdevs_list": [ 00:20:05.953 { 00:20:05.953 "name": null, 00:20:05.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.953 "is_configured": false, 00:20:05.953 "data_offset": 0, 00:20:05.953 "data_size": 7936 00:20:05.953 }, 00:20:05.953 { 00:20:05.953 "name": "BaseBdev2", 00:20:05.953 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:05.953 "is_configured": true, 00:20:05.953 "data_offset": 256, 00:20:05.953 "data_size": 7936 00:20:05.953 } 00:20:05.953 ] 00:20:05.953 }' 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.953 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.523 11:36:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.523 "name": "raid_bdev1", 00:20:06.523 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:06.523 "strip_size_kb": 0, 00:20:06.523 "state": "online", 00:20:06.523 "raid_level": "raid1", 00:20:06.523 "superblock": true, 00:20:06.523 "num_base_bdevs": 2, 00:20:06.523 "num_base_bdevs_discovered": 1, 00:20:06.523 "num_base_bdevs_operational": 1, 00:20:06.523 "base_bdevs_list": [ 00:20:06.523 { 00:20:06.523 "name": null, 00:20:06.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.523 "is_configured": false, 00:20:06.523 "data_offset": 0, 00:20:06.523 "data_size": 7936 00:20:06.523 }, 00:20:06.523 { 00:20:06.523 "name": "BaseBdev2", 00:20:06.523 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:06.523 "is_configured": true, 00:20:06.523 "data_offset": 256, 
00:20:06.523 "data_size": 7936 00:20:06.523 } 00:20:06.523 ] 00:20:06.523 }' 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.523 [2024-11-05 11:36:05.651140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:06.523 [2024-11-05 11:36:05.651283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.523 [2024-11-05 11:36:05.651322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:06.523 [2024-11-05 11:36:05.651350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.523 [2024-11-05 11:36:05.651508] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.523 [2024-11-05 11:36:05.651549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.523 [2024-11-05 11:36:05.651617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:06.523 [2024-11-05 11:36:05.651630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:06.523 [2024-11-05 11:36:05.651639] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:06.523 [2024-11-05 11:36:05.651650] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:06.523 BaseBdev1 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.523 11:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.462 11:36:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.462 "name": "raid_bdev1", 00:20:07.462 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:07.462 "strip_size_kb": 0, 00:20:07.462 "state": "online", 00:20:07.462 "raid_level": "raid1", 00:20:07.462 "superblock": true, 00:20:07.462 "num_base_bdevs": 2, 00:20:07.462 "num_base_bdevs_discovered": 1, 00:20:07.462 "num_base_bdevs_operational": 1, 00:20:07.462 "base_bdevs_list": [ 00:20:07.462 { 00:20:07.462 "name": null, 00:20:07.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.462 "is_configured": false, 00:20:07.462 "data_offset": 0, 00:20:07.462 "data_size": 7936 00:20:07.462 }, 00:20:07.462 { 00:20:07.462 "name": "BaseBdev2", 00:20:07.462 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:07.462 "is_configured": true, 00:20:07.462 "data_offset": 256, 00:20:07.462 "data_size": 7936 00:20:07.462 } 00:20:07.462 ] 00:20:07.462 }' 00:20:07.462 11:36:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.462 11:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.030 "name": "raid_bdev1", 00:20:08.030 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:08.030 "strip_size_kb": 0, 00:20:08.030 "state": "online", 00:20:08.030 "raid_level": "raid1", 00:20:08.030 "superblock": true, 00:20:08.030 "num_base_bdevs": 2, 00:20:08.030 "num_base_bdevs_discovered": 1, 00:20:08.030 "num_base_bdevs_operational": 1, 00:20:08.030 "base_bdevs_list": [ 00:20:08.030 { 00:20:08.030 "name": 
null, 00:20:08.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.030 "is_configured": false, 00:20:08.030 "data_offset": 0, 00:20:08.030 "data_size": 7936 00:20:08.030 }, 00:20:08.030 { 00:20:08.030 "name": "BaseBdev2", 00:20:08.030 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:08.030 "is_configured": true, 00:20:08.030 "data_offset": 256, 00:20:08.030 "data_size": 7936 00:20:08.030 } 00:20:08.030 ] 00:20:08.030 }' 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.030 [2024-11-05 11:36:07.196467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.030 [2024-11-05 11:36:07.196632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:08.030 [2024-11-05 11:36:07.196688] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:08.030 request: 00:20:08.030 { 00:20:08.030 "base_bdev": "BaseBdev1", 00:20:08.030 "raid_bdev": "raid_bdev1", 00:20:08.030 "method": "bdev_raid_add_base_bdev", 00:20:08.030 "req_id": 1 00:20:08.030 } 00:20:08.030 Got JSON-RPC error response 00:20:08.030 response: 00:20:08.030 { 00:20:08.030 "code": -22, 00:20:08.030 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:08.030 } 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:08.030 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.031 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.031 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.031 11:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.969 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.970 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.970 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.970 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.229 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.229 "name": "raid_bdev1", 00:20:09.229 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:09.229 "strip_size_kb": 0, 
00:20:09.229 "state": "online", 00:20:09.229 "raid_level": "raid1", 00:20:09.229 "superblock": true, 00:20:09.229 "num_base_bdevs": 2, 00:20:09.229 "num_base_bdevs_discovered": 1, 00:20:09.229 "num_base_bdevs_operational": 1, 00:20:09.229 "base_bdevs_list": [ 00:20:09.229 { 00:20:09.229 "name": null, 00:20:09.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.229 "is_configured": false, 00:20:09.229 "data_offset": 0, 00:20:09.229 "data_size": 7936 00:20:09.229 }, 00:20:09.229 { 00:20:09.229 "name": "BaseBdev2", 00:20:09.229 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:09.229 "is_configured": true, 00:20:09.229 "data_offset": 256, 00:20:09.229 "data_size": 7936 00:20:09.229 } 00:20:09.229 ] 00:20:09.229 }' 00:20:09.229 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.229 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.489 
11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.489 "name": "raid_bdev1", 00:20:09.489 "uuid": "8a536fc4-be39-48e3-a0f0-212e153be195", 00:20:09.489 "strip_size_kb": 0, 00:20:09.489 "state": "online", 00:20:09.489 "raid_level": "raid1", 00:20:09.489 "superblock": true, 00:20:09.489 "num_base_bdevs": 2, 00:20:09.489 "num_base_bdevs_discovered": 1, 00:20:09.489 "num_base_bdevs_operational": 1, 00:20:09.489 "base_bdevs_list": [ 00:20:09.489 { 00:20:09.489 "name": null, 00:20:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.489 "is_configured": false, 00:20:09.489 "data_offset": 0, 00:20:09.489 "data_size": 7936 00:20:09.489 }, 00:20:09.489 { 00:20:09.489 "name": "BaseBdev2", 00:20:09.489 "uuid": "10121f51-25a8-50b7-bc25-e89fffe66265", 00:20:09.489 "is_configured": true, 00:20:09.489 "data_offset": 256, 00:20:09.489 "data_size": 7936 00:20:09.489 } 00:20:09.489 ] 00:20:09.489 }' 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.489 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89033 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89033 ']' 00:20:09.748 11:36:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89033 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89033 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:09.748 killing process with pid 89033 00:20:09.748 Received shutdown signal, test time was about 60.000000 seconds 00:20:09.748 00:20:09.748 Latency(us) 00:20:09.748 [2024-11-05T11:36:09.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.748 [2024-11-05T11:36:09.022Z] =================================================================================================================== 00:20:09.748 [2024-11-05T11:36:09.022Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89033' 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89033 00:20:09.748 [2024-11-05 11:36:08.800751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.748 [2024-11-05 11:36:08.800846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.748 [2024-11-05 11:36:08.800882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.748 [2024-11-05 11:36:08.800892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:09.748 11:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89033 00:20:10.008 [2024-11-05 11:36:09.082001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:10.945 11:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:10.945 00:20:10.945 real 0m17.271s 00:20:10.945 user 0m22.606s 00:20:10.945 sys 0m1.681s 00:20:10.945 11:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.945 ************************************ 00:20:10.945 END TEST raid_rebuild_test_sb_md_interleaved 00:20:10.945 ************************************ 00:20:10.945 11:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.945 11:36:10 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:10.945 11:36:10 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:10.945 11:36:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89033 ']' 00:20:10.945 11:36:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89033 00:20:10.945 11:36:10 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:10.945 ************************************ 00:20:10.945 END TEST bdev_raid 00:20:10.945 ************************************ 00:20:10.945 00:20:10.945 real 11m49.680s 00:20:10.945 user 16m2.441s 00:20:10.945 sys 1m52.390s 00:20:10.945 11:36:10 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.945 11:36:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.205 11:36:10 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.205 11:36:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:11.205 11:36:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:11.205 11:36:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.205 
************************************ 00:20:11.205 START TEST spdkcli_raid 00:20:11.205 ************************************ 00:20:11.205 11:36:10 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.205 * Looking for test storage... 00:20:11.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.205 11:36:10 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:11.205 11:36:10 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:11.205 11:36:10 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:11.205 11:36:10 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:11.205 11:36:10 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.205 11:36:10 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.205 11:36:10 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.205 11:36:10 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.205 11:36:10 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.467 11:36:10 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:11.467 11:36:10 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.467 11:36:10 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:11.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.467 --rc genhtml_branch_coverage=1 00:20:11.467 --rc genhtml_function_coverage=1 00:20:11.467 --rc genhtml_legend=1 00:20:11.467 --rc geninfo_all_blocks=1 00:20:11.467 --rc geninfo_unexecuted_blocks=1 00:20:11.467 00:20:11.467 ' 00:20:11.467 11:36:10 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:11.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.467 --rc genhtml_branch_coverage=1 00:20:11.467 --rc genhtml_function_coverage=1 00:20:11.467 --rc genhtml_legend=1 00:20:11.467 --rc geninfo_all_blocks=1 00:20:11.467 --rc geninfo_unexecuted_blocks=1 00:20:11.467 00:20:11.467 ' 00:20:11.468 
11:36:10 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.468 --rc genhtml_branch_coverage=1 00:20:11.468 --rc genhtml_function_coverage=1 00:20:11.468 --rc genhtml_legend=1 00:20:11.468 --rc geninfo_all_blocks=1 00:20:11.468 --rc geninfo_unexecuted_blocks=1 00:20:11.468 00:20:11.468 ' 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.468 --rc genhtml_branch_coverage=1 00:20:11.468 --rc genhtml_function_coverage=1 00:20:11.468 --rc genhtml_legend=1 00:20:11.468 --rc geninfo_all_blocks=1 00:20:11.468 --rc geninfo_unexecuted_blocks=1 00:20:11.468 00:20:11.468 ' 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:11.468 11:36:10 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89716 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:11.468 11:36:10 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89716 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 89716 ']' 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.468 11:36:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.468 [2024-11-05 11:36:10.644282] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:11.468 [2024-11-05 11:36:10.644493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89716 ] 00:20:11.739 [2024-11-05 11:36:10.821030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:11.739 [2024-11-05 11:36:10.932857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.739 [2024-11-05 11:36:10.932868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:20:12.719 11:36:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.719 11:36:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.719 11:36:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.719 11:36:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:12.719 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:12.719 ' 00:20:14.099 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:14.099 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:14.358 11:36:13 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:14.358 11:36:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.358 11:36:13 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.358 11:36:13 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:14.358 11:36:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.358 11:36:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.358 11:36:13 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:14.358 ' 00:20:15.297 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:15.556 11:36:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:15.556 11:36:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.556 11:36:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.556 11:36:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:15.556 11:36:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.556 11:36:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.557 11:36:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:15.557 11:36:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:16.126 11:36:15 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:16.126 11:36:15 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:16.126 11:36:15 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:16.126 11:36:15 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.126 11:36:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 11:36:15 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:16.126 11:36:15 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.126 11:36:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 11:36:15 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:16.126 ' 00:20:17.064 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:17.324 11:36:16 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:17.324 11:36:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.324 11:36:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.324 11:36:16 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:17.324 11:36:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.324 11:36:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:17.324 11:36:16 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:17.324 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:17.324 ' 00:20:18.705 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:18.705 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:18.705 11:36:17 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:18.705 11:36:17 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89716 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89716 ']' 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89716 00:20:18.705 11:36:17 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:18.705 11:36:17 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89716 00:20:18.965 11:36:18 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:18.965 11:36:18 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:18.965 11:36:18 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89716' 00:20:18.965 killing process with pid 89716 00:20:18.965 11:36:18 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 89716 00:20:18.965 11:36:18 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 89716 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89716 ']' 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89716 00:20:21.503 11:36:20 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89716 ']' 00:20:21.503 11:36:20 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89716 00:20:21.503 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89716) - No such process 00:20:21.503 11:36:20 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 89716 is not found' 00:20:21.503 Process with pid 89716 is not found 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:21.503 11:36:20 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:21.503 00:20:21.503 real 0m9.960s 00:20:21.503 user 0m20.484s 00:20:21.503 sys 
0m1.154s 00:20:21.503 11:36:20 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:21.503 11:36:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.503 ************************************ 00:20:21.503 END TEST spdkcli_raid 00:20:21.503 ************************************ 00:20:21.503 11:36:20 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:21.503 11:36:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:21.503 11:36:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:21.503 11:36:20 -- common/autotest_common.sh@10 -- # set +x 00:20:21.503 ************************************ 00:20:21.503 START TEST blockdev_raid5f 00:20:21.503 ************************************ 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:21.504 * Looking for test storage... 00:20:21.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.504 11:36:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:21.504 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.504 --rc genhtml_branch_coverage=1 00:20:21.504 --rc genhtml_function_coverage=1 00:20:21.504 --rc genhtml_legend=1 00:20:21.504 --rc geninfo_all_blocks=1 00:20:21.504 --rc geninfo_unexecuted_blocks=1 00:20:21.504 00:20:21.504 ' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:21.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.504 --rc genhtml_branch_coverage=1 00:20:21.504 --rc genhtml_function_coverage=1 00:20:21.504 --rc genhtml_legend=1 00:20:21.504 --rc geninfo_all_blocks=1 00:20:21.504 --rc geninfo_unexecuted_blocks=1 00:20:21.504 00:20:21.504 ' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:21.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.504 --rc genhtml_branch_coverage=1 00:20:21.504 --rc genhtml_function_coverage=1 00:20:21.504 --rc genhtml_legend=1 00:20:21.504 --rc geninfo_all_blocks=1 00:20:21.504 --rc geninfo_unexecuted_blocks=1 00:20:21.504 00:20:21.504 ' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:21.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.504 --rc genhtml_branch_coverage=1 00:20:21.504 --rc genhtml_function_coverage=1 00:20:21.504 --rc genhtml_legend=1 00:20:21.504 --rc geninfo_all_blocks=1 00:20:21.504 --rc geninfo_unexecuted_blocks=1 00:20:21.504 00:20:21.504 ' 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89985 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:21.504 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89985 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 89985 ']' 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.504 11:36:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:21.504 [2024-11-05 11:36:20.655627] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:21.504 [2024-11-05 11:36:20.655834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89985 ] 00:20:21.764 [2024-11-05 11:36:20.833955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.764 [2024-11-05 11:36:20.940607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 Malloc0 00:20:22.703 Malloc1 00:20:22.703 Malloc2 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 11:36:21 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 11:36:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.703 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:22.963 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:22.963 11:36:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:22.963 11:36:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.963 11:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "18149e7f-3c55-4312-9287-8310bf63d380"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18149e7f-3c55-4312-9287-8310bf63d380",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "18149e7f-3c55-4312-9287-8310bf63d380",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "091af96e-e73b-415e-9bab-3065092e0986",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bb426c55-2723-4d9d-9676-bd70384b845e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f1a51333-3221-4601-8bb5-2d11255152a0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:22.963 11:36:22 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89985 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 89985 ']' 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 89985 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.963 
11:36:22 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89985 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89985' 00:20:22.963 killing process with pid 89985 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 89985 00:20:22.963 11:36:22 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 89985 00:20:25.513 11:36:24 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:25.513 11:36:24 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:25.513 11:36:24 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:25.513 11:36:24 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.513 11:36:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:25.513 ************************************ 00:20:25.513 START TEST bdev_hello_world 00:20:25.513 ************************************ 00:20:25.513 11:36:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:25.513 [2024-11-05 11:36:24.670872] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:25.513 [2024-11-05 11:36:24.671062] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90052 ] 00:20:25.773 [2024-11-05 11:36:24.841840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.773 [2024-11-05 11:36:24.948705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.343 [2024-11-05 11:36:25.455248] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:26.343 [2024-11-05 11:36:25.455295] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:26.343 [2024-11-05 11:36:25.455311] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:26.343 [2024-11-05 11:36:25.455759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:26.343 [2024-11-05 11:36:25.455880] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:26.343 [2024-11-05 11:36:25.455894] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:26.343 [2024-11-05 11:36:25.455936] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:26.343 00:20:26.343 [2024-11-05 11:36:25.455952] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:27.726 00:20:27.726 real 0m2.160s 00:20:27.726 user 0m1.795s 00:20:27.726 sys 0m0.245s 00:20:27.726 ************************************ 00:20:27.726 END TEST bdev_hello_world 00:20:27.726 ************************************ 00:20:27.726 11:36:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:27.726 11:36:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 11:36:26 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:27.726 11:36:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:27.726 11:36:26 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:27.726 11:36:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 ************************************ 00:20:27.726 START TEST bdev_bounds 00:20:27.726 ************************************ 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90094 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:27.726 Process bdevio pid: 90094 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90094' 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90094 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90094 ']' 00:20:27.726 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:27.726 11:36:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:27.726 [2024-11-05 11:36:26.909638] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:20:27.726 [2024-11-05 11:36:26.909867] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90094 ] 00:20:27.986 [2024-11-05 11:36:27.088765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:27.986 [2024-11-05 11:36:27.197305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.986 [2024-11-05 11:36:27.197534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.986 [2024-11-05 11:36:27.197542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.555 11:36:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.555 11:36:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:20:28.555 11:36:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:28.555 I/O targets: 00:20:28.555 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:28.555 00:20:28.555 00:20:28.555 CUnit 
- A unit testing framework for C - Version 2.1-3 00:20:28.555 http://cunit.sourceforge.net/ 00:20:28.555 00:20:28.555 00:20:28.555 Suite: bdevio tests on: raid5f 00:20:28.555 Test: blockdev write read block ...passed 00:20:28.555 Test: blockdev write zeroes read block ...passed 00:20:28.555 Test: blockdev write zeroes read no split ...passed 00:20:28.815 Test: blockdev write zeroes read split ...passed 00:20:28.815 Test: blockdev write zeroes read split partial ...passed 00:20:28.815 Test: blockdev reset ...passed 00:20:28.815 Test: blockdev write read 8 blocks ...passed 00:20:28.815 Test: blockdev write read size > 128k ...passed 00:20:28.815 Test: blockdev write read invalid size ...passed 00:20:28.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:28.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:28.815 Test: blockdev write read max offset ...passed 00:20:28.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:28.815 Test: blockdev writev readv 8 blocks ...passed 00:20:28.815 Test: blockdev writev readv 30 x 1block ...passed 00:20:28.815 Test: blockdev writev readv block ...passed 00:20:28.815 Test: blockdev writev readv size > 128k ...passed 00:20:28.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:28.815 Test: blockdev comparev and writev ...passed 00:20:28.815 Test: blockdev nvme passthru rw ...passed 00:20:28.815 Test: blockdev nvme passthru vendor specific ...passed 00:20:28.815 Test: blockdev nvme admin passthru ...passed 00:20:28.815 Test: blockdev copy ...passed 00:20:28.815 00:20:28.815 Run Summary: Type Total Ran Passed Failed Inactive 00:20:28.815 suites 1 1 n/a 0 0 00:20:28.815 tests 23 23 23 0 0 00:20:28.815 asserts 130 130 130 0 n/a 00:20:28.815 00:20:28.815 Elapsed time = 0.564 seconds 00:20:28.815 0 00:20:28.815 11:36:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90094 00:20:28.815 11:36:28 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90094 ']' 00:20:28.815 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90094 00:20:28.815 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:20:28.815 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.815 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90094 00:20:29.075 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:29.075 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:29.075 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90094' 00:20:29.075 killing process with pid 90094 00:20:29.075 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90094 00:20:29.075 11:36:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90094 00:20:30.457 11:36:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:30.457 00:20:30.457 real 0m2.605s 00:20:30.457 user 0m6.418s 00:20:30.457 sys 0m0.365s 00:20:30.457 11:36:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:30.457 11:36:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:30.457 ************************************ 00:20:30.457 END TEST bdev_bounds 00:20:30.457 ************************************ 00:20:30.457 11:36:29 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:30.457 11:36:29 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:30.457 11:36:29 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:30.457 11:36:29 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:30.457 ************************************ 00:20:30.457 START TEST bdev_nbd 00:20:30.457 ************************************ 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90154 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90154 /var/tmp/spdk-nbd.sock 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90154 ']' 00:20:30.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.457 11:36:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:30.457 [2024-11-05 11:36:29.610736] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:30.457 [2024-11-05 11:36:29.610970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.717 [2024-11-05 11:36:29.790364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.717 [2024-11-05 11:36:29.897330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.286 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:31.287 11:36:30 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.547 1+0 records in 00:20:31.547 1+0 records out 00:20:31.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398497 s, 10.3 MB/s 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:31.547 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:31.808 { 00:20:31.808 "nbd_device": "/dev/nbd0", 00:20:31.808 "bdev_name": "raid5f" 00:20:31.808 } 00:20:31.808 ]' 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:31.808 { 00:20:31.808 "nbd_device": "/dev/nbd0", 00:20:31.808 "bdev_name": "raid5f" 00:20:31.808 } 00:20:31.808 ]' 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.808 11:36:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.069 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:32.330 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:32.590 /dev/nbd0 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:32.590 11:36:31 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:32.590 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.591 1+0 records in 00:20:32.591 1+0 records out 00:20:32.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603898 s, 6.8 MB/s 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.591 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:32.851 { 00:20:32.851 "nbd_device": "/dev/nbd0", 00:20:32.851 "bdev_name": "raid5f" 00:20:32.851 } 00:20:32.851 ]' 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:32.851 { 00:20:32.851 "nbd_device": "/dev/nbd0", 00:20:32.851 "bdev_name": "raid5f" 00:20:32.851 } 00:20:32.851 ]' 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:32.851 256+0 records in 00:20:32.851 256+0 records out 00:20:32.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131445 s, 79.8 MB/s 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:32.851 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:32.851 256+0 records in 00:20:32.851 256+0 records out 00:20:32.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307445 s, 34.1 MB/s 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:32.852 11:36:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.852 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.112 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:33.372 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:33.633 malloc_lvol_verify 00:20:33.633 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:33.633 385a65ff-8aa4-40da-8be5-ef3a0095debc 00:20:33.633 11:36:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:33.893 c79a0575-366f-4c4d-8846-8c22a5ab8560 00:20:33.893 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:34.153 /dev/nbd0 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:34.153 mke2fs 1.47.0 (5-Feb-2023) 00:20:34.153 Discarding device blocks: 0/4096 done 00:20:34.153 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:34.153 00:20:34.153 Allocating group tables: 0/1 done 00:20:34.153 Writing inode tables: 0/1 done 00:20:34.153 Creating journal (1024 blocks): done 00:20:34.153 Writing superblocks and filesystem accounting information: 0/1 done 00:20:34.153 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.153 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90154 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90154 ']' 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90154 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90154 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90154' 00:20:34.414 killing process with pid 90154 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90154 00:20:34.414 11:36:33 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90154 00:20:35.799 ************************************ 00:20:35.799 END TEST bdev_nbd 00:20:35.799 ************************************ 00:20:35.799 11:36:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:35.799 00:20:35.799 real 0m5.433s 00:20:35.799 user 0m7.320s 00:20:35.799 sys 0m1.300s 00:20:35.799 11:36:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:35.799 11:36:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:35.799 11:36:34 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:35.799 11:36:34 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:35.799 11:36:34 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:35.799 11:36:34 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:35.799 11:36:34 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:35.799 11:36:34 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:35.799 11:36:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:35.799 ************************************ 00:20:35.799 START TEST bdev_fio 00:20:35.799 ************************************ 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:35.799 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:35.799 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:20:35.800 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:36.061 ************************************ 00:20:36.061 START TEST bdev_fio_rw_verify 00:20:36.061 ************************************ 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.061 11:36:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:36.322 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:36.322 fio-3.35 00:20:36.322 Starting 1 thread 00:20:48.608 00:20:48.608 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90352: Tue Nov 5 11:36:46 2024 00:20:48.608 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(490MiB/10001msec) 00:20:48.608 slat (nsec): min=16731, max=69615, avg=18662.31, stdev=1945.86 00:20:48.608 clat (usec): min=10, max=303, avg=127.41, stdev=44.24 00:20:48.608 lat (usec): min=29, max=331, avg=146.07, stdev=44.43 00:20:48.608 clat percentiles (usec): 00:20:48.608 | 50.000th=[ 131], 99.000th=[ 210], 99.900th=[ 231], 99.990th=[ 265], 00:20:48.608 | 99.999th=[ 293] 00:20:48.608 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(506MiB/9877msec); 0 zone resets 00:20:48.608 slat (usec): min=7, max=299, avg=16.06, stdev= 3.74 00:20:48.608 clat (usec): min=56, max=1053, avg=295.53, stdev=38.63 00:20:48.608 lat (usec): min=71, max=1258, avg=311.60, stdev=39.47 00:20:48.608 clat percentiles (usec): 00:20:48.608 | 50.000th=[ 302], 99.000th=[ 367], 99.900th=[ 553], 99.990th=[ 930], 00:20:48.608 | 99.999th=[ 1012] 00:20:48.608 bw ( KiB/s): min=48680, max=55008, per=98.79%, avg=51788.74, stdev=1668.77, samples=19 00:20:48.608 iops : min=12170, max=13752, avg=12947.16, stdev=417.21, samples=19 00:20:48.608 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.37%, 250=38.99%, 500=44.57% 00:20:48.608 lat (usec) : 750=0.04%, 1000=0.02% 00:20:48.608 lat (msec) : 2=0.01% 00:20:48.608 cpu : usr=98.88%, sys=0.46%, ctx=22, majf=0, minf=10226 00:20:48.608 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.608 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.608 issued rwts: total=125451,129443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.608 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:48.608 00:20:48.608 Run status group 0 (all jobs): 00:20:48.608 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=490MiB (514MB), run=10001-10001msec 00:20:48.608 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=506MiB (530MB), run=9877-9877msec 00:20:48.608 ----------------------------------------------------- 00:20:48.608 Suppressions used: 00:20:48.608 count bytes template 00:20:48.608 1 7 /usr/src/fio/parse.c 00:20:48.608 107 10272 /usr/src/fio/iolog.c 00:20:48.608 1 8 libtcmalloc_minimal.so 00:20:48.608 1 904 libcrypto.so 00:20:48.608 ----------------------------------------------------- 00:20:48.608 00:20:48.872 00:20:48.872 real 0m12.714s 00:20:48.872 user 0m12.973s 00:20:48.872 sys 0m0.685s 00:20:48.872 ************************************ 00:20:48.872 END TEST bdev_fio_rw_verify 00:20:48.872 ************************************ 00:20:48.872 11:36:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "18149e7f-3c55-4312-9287-8310bf63d380"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18149e7f-3c55-4312-9287-8310bf63d380",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "18149e7f-3c55-4312-9287-8310bf63d380",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "091af96e-e73b-415e-9bab-3065092e0986",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bb426c55-2723-4d9d-9676-bd70384b845e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f1a51333-3221-4601-8bb5-2d11255152a0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:48.873 11:36:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:48.873 /home/vagrant/spdk_repo/spdk 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:20:48.873 00:20:48.873 real 0m13.032s 00:20:48.873 user 0m13.108s 00:20:48.873 sys 0m0.832s 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.873 11:36:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:48.873 ************************************ 00:20:48.873 END TEST bdev_fio 00:20:48.873 ************************************ 00:20:48.873 11:36:48 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:48.873 11:36:48 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:48.873 11:36:48 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:48.873 11:36:48 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.873 11:36:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:48.873 ************************************ 00:20:48.873 START TEST bdev_verify 00:20:48.873 ************************************ 00:20:48.873 11:36:48 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:49.133 [2024-11-05 11:36:48.207738] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:49.133 [2024-11-05 11:36:48.207930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90516 ] 00:20:49.133 [2024-11-05 11:36:48.383737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:49.392 [2024-11-05 11:36:48.491149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.392 [2024-11-05 11:36:48.491260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.960 Running I/O for 5 seconds... 00:20:51.831 10949.00 IOPS, 42.77 MiB/s [2024-11-05T11:36:52.041Z] 11011.50 IOPS, 43.01 MiB/s [2024-11-05T11:36:53.418Z] 10997.33 IOPS, 42.96 MiB/s [2024-11-05T11:36:54.353Z] 11016.50 IOPS, 43.03 MiB/s [2024-11-05T11:36:54.353Z] 11047.40 IOPS, 43.15 MiB/s 00:20:55.079 Latency(us) 00:20:55.079 [2024-11-05T11:36:54.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.080 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:55.080 Verification LBA range: start 0x0 length 0x2000 00:20:55.080 raid5f : 5.01 4351.37 17.00 0.00 0.00 44310.85 168.13 31594.65 00:20:55.080 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:55.080 Verification LBA range: start 0x2000 length 0x2000 00:20:55.080 raid5f : 5.02 6670.27 26.06 0.00 0.00 28914.44 248.62 21406.52 00:20:55.080 [2024-11-05T11:36:54.354Z] =================================================================================================================== 00:20:55.080 [2024-11-05T11:36:54.354Z] Total : 11021.63 43.05 0.00 0.00 34991.90 168.13 31594.65 00:20:56.457 ************************************ 00:20:56.457 END TEST bdev_verify 00:20:56.457 ************************************ 00:20:56.457 00:20:56.457 real 0m7.224s 00:20:56.457 user 0m13.382s 00:20:56.457 sys 0m0.254s 
00:20:56.457 11:36:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:56.457 11:36:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:56.457 11:36:55 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:56.457 11:36:55 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:56.457 11:36:55 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:56.457 11:36:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.457 ************************************ 00:20:56.457 START TEST bdev_verify_big_io 00:20:56.457 ************************************ 00:20:56.457 11:36:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:56.457 [2024-11-05 11:36:55.501179] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:20:56.457 [2024-11-05 11:36:55.501366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90614 ] 00:20:56.457 [2024-11-05 11:36:55.679983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:56.717 [2024-11-05 11:36:55.788853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.717 [2024-11-05 11:36:55.788881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.285 Running I/O for 5 seconds... 
00:20:59.599 695.00 IOPS, 43.44 MiB/s [2024-11-05T11:36:59.822Z] 792.00 IOPS, 49.50 MiB/s [2024-11-05T11:37:00.760Z] 803.67 IOPS, 50.23 MiB/s [2024-11-05T11:37:01.696Z] 824.50 IOPS, 51.53 MiB/s [2024-11-05T11:37:01.696Z] 812.60 IOPS, 50.79 MiB/s 00:21:02.422 Latency(us) 00:21:02.422 [2024-11-05T11:37:01.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.422 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:02.422 Verification LBA range: start 0x0 length 0x200 00:21:02.422 raid5f : 5.18 343.45 21.47 0.00 0.00 9246631.02 386.35 395619.94 00:21:02.422 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:02.422 Verification LBA range: start 0x200 length 0x200 00:21:02.422 raid5f : 5.24 459.72 28.73 0.00 0.00 6989573.92 155.61 304041.25 00:21:02.422 [2024-11-05T11:37:01.696Z] =================================================================================================================== 00:21:02.422 [2024-11-05T11:37:01.696Z] Total : 803.17 50.20 0.00 0.00 7947570.47 155.61 395619.94 00:21:03.800 ************************************ 00:21:03.800 END TEST bdev_verify_big_io 00:21:03.800 ************************************ 00:21:03.800 00:21:03.800 real 0m7.461s 00:21:03.800 user 0m13.826s 00:21:03.800 sys 0m0.275s 00:21:03.800 11:37:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.800 11:37:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.800 11:37:02 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:03.800 11:37:02 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:03.800 11:37:02 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:03.800 11:37:02 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:03.800 ************************************ 00:21:03.800 START TEST bdev_write_zeroes 00:21:03.800 ************************************ 00:21:03.800 11:37:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:03.800 [2024-11-05 11:37:03.034177] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:21:03.800 [2024-11-05 11:37:03.034287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90707 ] 00:21:04.058 [2024-11-05 11:37:03.207938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.058 [2024-11-05 11:37:03.318016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.625 Running I/O for 1 seconds... 
00:21:05.561 30231.00 IOPS, 118.09 MiB/s 00:21:05.561 Latency(us) 00:21:05.561 [2024-11-05T11:37:04.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.561 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:05.561 raid5f : 1.01 30215.28 118.03 0.00 0.00 4224.51 1216.28 5723.67 00:21:05.561 [2024-11-05T11:37:04.835Z] =================================================================================================================== 00:21:05.561 [2024-11-05T11:37:04.835Z] Total : 30215.28 118.03 0.00 0.00 4224.51 1216.28 5723.67 00:21:06.939 00:21:06.939 real 0m3.167s 00:21:06.939 user 0m2.792s 00:21:06.939 sys 0m0.250s 00:21:06.939 ************************************ 00:21:06.939 END TEST bdev_write_zeroes 00:21:06.939 ************************************ 00:21:06.939 11:37:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.939 11:37:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:06.939 11:37:06 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:06.939 11:37:06 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:06.939 11:37:06 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:06.939 11:37:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:06.939 ************************************ 00:21:06.939 START TEST bdev_json_nonenclosed 00:21:06.939 ************************************ 00:21:06.939 11:37:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:07.198 [2024-11-05 
11:37:06.276297] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:21:07.198 [2024-11-05 11:37:06.276437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90766 ] 00:21:07.198 [2024-11-05 11:37:06.453235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.457 [2024-11-05 11:37:06.555540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.457 [2024-11-05 11:37:06.555714] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:07.457 [2024-11-05 11:37:06.555745] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:07.457 [2024-11-05 11:37:06.555755] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:07.717 00:21:07.717 real 0m0.603s 00:21:07.717 user 0m0.365s 00:21:07.717 sys 0m0.134s 00:21:07.717 ************************************ 00:21:07.717 END TEST bdev_json_nonenclosed 00:21:07.717 ************************************ 00:21:07.717 11:37:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:07.717 11:37:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 11:37:06 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:07.717 11:37:06 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:07.717 11:37:06 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:07.717 11:37:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 
************************************ 00:21:07.717 START TEST bdev_json_nonarray 00:21:07.717 ************************************ 00:21:07.717 11:37:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:07.717 [2024-11-05 11:37:06.953534] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:21:07.717 [2024-11-05 11:37:06.953732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90791 ] 00:21:07.977 [2024-11-05 11:37:07.130623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.977 [2024-11-05 11:37:07.238138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.977 [2024-11-05 11:37:07.238296] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:07.977 [2024-11-05 11:37:07.238348] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:07.977 [2024-11-05 11:37:07.238380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:08.237 00:21:08.237 real 0m0.612s 00:21:08.237 user 0m0.368s 00:21:08.237 sys 0m0.139s 00:21:08.237 11:37:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:08.237 ************************************ 00:21:08.237 END TEST bdev_json_nonarray 00:21:08.237 ************************************ 00:21:08.237 11:37:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:08.498 11:37:07 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:08.498 00:21:08.498 real 0m47.247s 00:21:08.498 user 1m3.699s 00:21:08.498 sys 0m4.912s 00:21:08.498 11:37:07 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:08.498 ************************************ 00:21:08.498 END TEST blockdev_raid5f 00:21:08.498 
************************************ 00:21:08.498 11:37:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:08.498 11:37:07 -- spdk/autotest.sh@194 -- # uname -s 00:21:08.498 11:37:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:08.498 11:37:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.498 11:37:07 -- common/autotest_common.sh@10 -- # set +x 00:21:08.498 11:37:07 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:08.498 11:37:07 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:08.498 11:37:07 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:21:08.498 11:37:07 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:08.498 11:37:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.498 11:37:07 -- common/autotest_common.sh@10 -- # set +x 00:21:08.498 11:37:07 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:08.498 11:37:07 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:08.498 11:37:07 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:08.498 11:37:07 -- common/autotest_common.sh@10 -- # set +x 00:21:11.038 INFO: APP EXITING 00:21:11.038 INFO: killing all VMs 00:21:11.038 INFO: killing vhost app 00:21:11.038 INFO: EXIT DONE 00:21:11.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.559 Waiting for block devices as requested 00:21:11.559 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.559 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:12.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.500 Cleaning 00:21:12.500 Removing: /var/run/dpdk/spdk0/config 00:21:12.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:12.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:12.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:12.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:12.500 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:12.500 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:12.500 Removing: /dev/shm/spdk_tgt_trace.pid57034 00:21:12.762 Removing: /var/run/dpdk/spdk0 00:21:12.762 Removing: /var/run/dpdk/spdk_pid56782 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57034 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57269 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57383 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57429 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57568 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57586 00:21:12.762 
Removing: /var/run/dpdk/spdk_pid57796 00:21:12.762 Removing: /var/run/dpdk/spdk_pid57902 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58015 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58137 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58250 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58290 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58332 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58402 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58525 00:21:12.762 Removing: /var/run/dpdk/spdk_pid58985 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59066 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59140 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59158 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59307 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59329 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59477 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59493 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59568 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59586 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59652 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59674 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59876 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59907 00:21:12.762 Removing: /var/run/dpdk/spdk_pid59996 00:21:12.762 Removing: /var/run/dpdk/spdk_pid61345 00:21:12.762 Removing: /var/run/dpdk/spdk_pid61551 00:21:12.762 Removing: /var/run/dpdk/spdk_pid61691 00:21:12.762 Removing: /var/run/dpdk/spdk_pid62340 00:21:12.762 Removing: /var/run/dpdk/spdk_pid62546 00:21:12.762 Removing: /var/run/dpdk/spdk_pid62697 00:21:12.762 Removing: /var/run/dpdk/spdk_pid63335 00:21:12.762 Removing: /var/run/dpdk/spdk_pid63665 00:21:12.762 Removing: /var/run/dpdk/spdk_pid63811 00:21:12.762 Removing: /var/run/dpdk/spdk_pid65196 00:21:12.762 Removing: /var/run/dpdk/spdk_pid65450 00:21:12.762 Removing: /var/run/dpdk/spdk_pid65601 00:21:12.762 Removing: /var/run/dpdk/spdk_pid66986 00:21:12.762 Removing: /var/run/dpdk/spdk_pid67239 00:21:12.762 Removing: /var/run/dpdk/spdk_pid67385 00:21:12.762 Removing: 
/var/run/dpdk/spdk_pid68781 00:21:12.762 Removing: /var/run/dpdk/spdk_pid69227 00:21:12.762 Removing: /var/run/dpdk/spdk_pid69367 00:21:12.762 Removing: /var/run/dpdk/spdk_pid70849 00:21:12.762 Removing: /var/run/dpdk/spdk_pid71111 00:21:12.762 Removing: /var/run/dpdk/spdk_pid71262 00:21:12.762 Removing: /var/run/dpdk/spdk_pid72745 00:21:13.023 Removing: /var/run/dpdk/spdk_pid73005 00:21:13.023 Removing: /var/run/dpdk/spdk_pid73145 00:21:13.023 Removing: /var/run/dpdk/spdk_pid74631 00:21:13.023 Removing: /var/run/dpdk/spdk_pid75118 00:21:13.023 Removing: /var/run/dpdk/spdk_pid75264 00:21:13.023 Removing: /var/run/dpdk/spdk_pid75407 00:21:13.023 Removing: /var/run/dpdk/spdk_pid75821 00:21:13.023 Removing: /var/run/dpdk/spdk_pid76549 00:21:13.023 Removing: /var/run/dpdk/spdk_pid76925 00:21:13.023 Removing: /var/run/dpdk/spdk_pid77615 00:21:13.023 Removing: /var/run/dpdk/spdk_pid78061 00:21:13.023 Removing: /var/run/dpdk/spdk_pid78811 00:21:13.023 Removing: /var/run/dpdk/spdk_pid79209 00:21:13.023 Removing: /var/run/dpdk/spdk_pid81171 00:21:13.023 Removing: /var/run/dpdk/spdk_pid81609 00:21:13.023 Removing: /var/run/dpdk/spdk_pid82045 00:21:13.023 Removing: /var/run/dpdk/spdk_pid84137 00:21:13.023 Removing: /var/run/dpdk/spdk_pid84624 00:21:13.023 Removing: /var/run/dpdk/spdk_pid85127 00:21:13.023 Removing: /var/run/dpdk/spdk_pid86186 00:21:13.023 Removing: /var/run/dpdk/spdk_pid86509 00:21:13.023 Removing: /var/run/dpdk/spdk_pid87446 00:21:13.023 Removing: /var/run/dpdk/spdk_pid87773 00:21:13.023 Removing: /var/run/dpdk/spdk_pid88710 00:21:13.023 Removing: /var/run/dpdk/spdk_pid89033 00:21:13.023 Removing: /var/run/dpdk/spdk_pid89716 00:21:13.023 Removing: /var/run/dpdk/spdk_pid89985 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90052 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90094 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90338 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90516 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90614 00:21:13.023 Removing: 
/var/run/dpdk/spdk_pid90707 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90766 00:21:13.023 Removing: /var/run/dpdk/spdk_pid90791 00:21:13.023 Clean 00:21:13.023 11:37:12 -- common/autotest_common.sh@1451 -- # return 0 00:21:13.023 11:37:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:13.023 11:37:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.023 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:21:13.284 11:37:12 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:13.284 11:37:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.284 11:37:12 -- common/autotest_common.sh@10 -- # set +x 00:21:13.284 11:37:12 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:13.284 11:37:12 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:13.284 11:37:12 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:13.284 11:37:12 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:13.284 11:37:12 -- spdk/autotest.sh@394 -- # hostname 00:21:13.284 11:37:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:13.544 geninfo: WARNING: invalid characters removed from testname! 
00:21:35.541 11:37:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:35.541 11:37:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:38.083 11:37:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:40.634 11:37:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:42.544 11:37:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:45.085 11:37:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:46.995 11:37:46 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:46.995 11:37:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:46.995 11:37:46 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:46.995 11:37:46 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:46.995 11:37:46 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:46.995 11:37:46 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:46.995 + [[ -n 5423 ]] 00:21:46.995 + sudo kill 5423 00:21:47.005 [Pipeline] } 00:21:47.022 [Pipeline] // timeout 00:21:47.027 [Pipeline] } 00:21:47.042 [Pipeline] // stage 00:21:47.047 [Pipeline] } 00:21:47.061 [Pipeline] // catchError 00:21:47.070 [Pipeline] stage 00:21:47.072 [Pipeline] { (Stop VM) 00:21:47.085 [Pipeline] sh 00:21:47.372 + vagrant halt 00:21:49.912 ==> default: Halting domain... 00:21:58.062 [Pipeline] sh 00:21:58.347 + vagrant destroy -f 00:22:00.887 ==> default: Removing domain... 
00:22:00.901 [Pipeline] sh 00:22:01.186 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:22:01.196 [Pipeline] } 00:22:01.212 [Pipeline] // stage 00:22:01.217 [Pipeline] } 00:22:01.232 [Pipeline] // dir 00:22:01.237 [Pipeline] } 00:22:01.252 [Pipeline] // wrap 00:22:01.259 [Pipeline] } 00:22:01.271 [Pipeline] // catchError 00:22:01.281 [Pipeline] stage 00:22:01.283 [Pipeline] { (Epilogue) 00:22:01.297 [Pipeline] sh 00:22:01.582 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:05.825 [Pipeline] catchError 00:22:05.827 [Pipeline] { 00:22:05.840 [Pipeline] sh 00:22:06.148 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:06.148 Artifacts sizes are good 00:22:06.157 [Pipeline] } 00:22:06.172 [Pipeline] // catchError 00:22:06.183 [Pipeline] archiveArtifacts 00:22:06.190 Archiving artifacts 00:22:06.287 [Pipeline] cleanWs 00:22:06.299 [WS-CLEANUP] Deleting project workspace... 00:22:06.299 [WS-CLEANUP] Deferred wipeout is used... 00:22:06.306 [WS-CLEANUP] done 00:22:06.308 [Pipeline] } 00:22:06.323 [Pipeline] // stage 00:22:06.329 [Pipeline] } 00:22:06.343 [Pipeline] // node 00:22:06.349 [Pipeline] End of Pipeline 00:22:06.386 Finished: SUCCESS